Intel Hardware

Details of New Intel Dunnington and Nehalem Architectures Leaked

Posted by ScuttleMonkey
from the aren't-leaks-just-more-effective-pr-these-days dept.
Daily Tech is reporting that details about Intel's new processor models were leaked over the weekend. Both the six-core Dunnington and the Nehalem architectures were featured in this leak. "Dunnington includes 16MB of L3 cache shared by all six processors. Each pair of cores can also access 3MB of local L2 cache. The end result is a design very similar to the AMD Barcelona quad-core processor; however, each Barcelona core contains 512KB L2 cache, whereas Dunnington cores share L2 cache in pairs. [...] Nehalem is everything Penryn is -- 45nm, SSE4, quad-core -- and then some. For starters, Intel will abandon the front-side bus model in favor of QuickPath Interconnect; a serial bus similar to HyperTransport."
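For a sense of scale, the leaked cache figures can be sanity-checked with some quick arithmetic. This is just a sketch: the numbers come straight from the leak, and the even per-core split is a simplifying assumption, not anything Intel has documented.

```python
# Back-of-the-envelope model of the leaked Dunnington cache layout:
# six cores, L2 shared per pair of cores, one L3 shared by all.
CORES = 6
L2_PER_PAIR_MB = 3   # each pair of cores shares 3MB of L2
L3_SHARED_MB = 16    # one 16MB L3 shared by all six cores

pairs = CORES // 2
total_l2 = pairs * L2_PER_PAIR_MB  # 9MB of L2 across the die

# Naive per-core share, assuming everything divides up evenly:
cache_per_core = L2_PER_PAIR_MB / 2 + L3_SHARED_MB / CORES

print(total_l2)        # total L2 on the die
print(cache_per_core)  # ~4.17MB per core under the even-split assumption
```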
  • by Anonymous Coward on Monday February 25, 2008 @02:48PM (#22548838)
    Sounds like good names to be used in a D&D game!

    Sir Dunnington against the evil lich lord Nehalem!
    • by milsoRgen (1016505) on Monday February 25, 2008 @03:21PM (#22549246) Homepage

      Sounds like good names to be used in a D&D game!
      I've always liked the way Intel code-names their processors, as I was born and raised in Tillamook [wikipedia.org], which had its own Mobile Processor [findarticles.com]. Nehalem [wikipedia.org] is in fact another city in Tillamook County, Oregon. Some of you might remember Nehalem's prior claim to fame was an Everclear [wikipedia.org] song on their breakthrough album Sparkle and Fade [amazon.com], entitled simply 'Nehalem' [sing365.com].
      • by jgarra23 (1109651)

        I've always liked the way Intel code-names their processors, as I was born and raised in Tillamook, which had its own Mobile Processor. Nehalem is in fact another city in Tillamook County, Oregon. Some of you might remember Nehalem's prior claim to fame was an Everclear song on their breakthrough album Sparkle and Fade, entitled simply 'Nehalem'.


        Don't forget the notorious Willamette chip!! Though I'm not sure if anyone wants to be known for that...
      • Yeah, you beat me to it. "They say you're losing your mind, they say you're leaving Nehalem" ... In college I actually had a Diablo II warrior named "Nehalem" ... barbarian who specialized in warcrys :P
      • by Muad'Dave (255648)

        Good cheese [tillamookcheese.com]. And ice cream.

          • I drive the coast from California to Washington state 4-5 times a year, and each direction, it is a must to stop at the Tillamook factory and drop $100 or so on cheese and whatever. Best sour cream in the world, too. My office has 10 or so coffee mugs from the place. The city also has a decent air and space museum.
          • by Z80xxc! (1111479)
            That's not Tillamook, that's Evergreen, if you're talking about the place I think you're talking about. But yes, it is awesome.

            All the Oregon names are of course because Intel has some major facilities in Oregon, mostly in Washington County, around the Beaverton area. They are in fact Oregon's biggest employer.
            • That's not Tillamook, that's Evergreen, if you're talking about the place I think you're talking about. But yes, it is awesome.

              WTF are you smoking? The only other place worth mentioning in the same vein was Bandon, OR [wikipedia.org], which the Tillamook County Creamery Association bought out several years ago and now sells cheese under that label, cheese that is for all intents and purposes the same as Tillamook's main label in regards to recipe and ingredients. The poster you are replying to was in fact correct. And seriously, wtf are you smoking.. I'd love to RYO [wikipedia.org] some of that, my friend!

    • Well, Harold (who later wound up with an arrow in his eye) vs Harald Hardrada was only a couple of miles up the road (from the Dunnington east of York, that is).
  • Wow (Score:5, Funny)

    by TubeSteak (669689) on Monday February 25, 2008 @02:57PM (#22548916) Journal
    They could have gone to 3 cores, like the competition. That seems like the logical thing to do, but they said "Fuck it, we're going to six". What part of this don't you understand? If two cores is good, and four cores is better, obviously six cores would make them the best fucking CPU that ever existed.

    http://www.theonion.com/content/node/33930 [theonion.com]
    /I'm just waiting for the day Intel says "this one goes to 11"
    • Re: (Score:2, Funny)

      by downix (84795)
      Intel's coming out with 6 cores.... and?

      *pets his 8-core SPARC*
      • Re:Wow (Score:4, Interesting)

        by suso (153703) * on Monday February 25, 2008 @03:17PM (#22549186) Homepage Journal
        Am I the only one who thinks that having 3 cores, 6 cores, 3MB and 12MB is weird? Where did all the multiples of three come from in the sea of powers of 2? Did we suddenly switch to trinary or something?
        • Re:Wow (Score:4, Interesting)

          by thsths (31372) on Monday February 25, 2008 @03:31PM (#22549374)

          Am I the only one who thinks that having 3 cores, 6 cores, 3MB and 12MB is weird? Where did all the multiples of three come from in the sea of powers of 2?
          Concerning the six cores: yes, that is weird. And after making fun of AMD for selling 3 core CPUs, it is now our obligation to make fun of Intel for announcing six core CPUs. Especially since they seem to tick pretty much the same boxes as AMD anyway. (Unfortunately 6 is more than 3, so I would still want an Intel...)

          For the cache, the matter is simple. If you can fit 12 MB, but not 16, then 12 is still better than 8. You build them in 3 units of 4 MB each, so no big deal.

        • Re: (Score:3, Insightful)

          by Firehed (942385)
          Does it really matter? Just because the math to double things is easier doesn't make it a more cost-effective move. Maybe due to the shape of the chip, it's a lot cheaper to make a triple-core die than a quad. It's not like the extra core should have any weird effects - apps that support multiple procs/cores will use the extra resources, and those that don't won't. My work XP machine can only use 3GB of RAM (despite having 4GB physically in there) and there's no detriment to such a setup.

          Yes, I find it
        • The base component of this is the Core 2 Duo. That is a dual-core unit, joined by a common L2 cache. What they are then doing is putting 3 of these together, and joining them with L3 cache. Hence, 6 cores. My guess is they figure that 8 cores would be too expensive, too hot, whatever to do at this point.
        • Re:Wow (Score:4, Informative)

          by sjames (1099) on Monday February 25, 2008 @04:25PM (#22550102) Homepage

          Not sure about Intel, but in AMD's case, it was cost recovery for quad core chips where one core had a defect. They just zap that one so it doesn't show up and sell a perfectly good 3 core chip.
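The salvage logic above can be put in rough numbers with a toy binomial yield model. The 90% per-core yield figure is invented for illustration; nothing here comes from AMD or Intel.

```python
from math import comb

def die_outcomes(cores=4, core_yield=0.9):
    """P(exactly k good cores) on one die, assuming independent per-core defects."""
    p = core_yield
    return {k: comb(cores, k) * p**k * (1 - p)**(cores - k) for k in range(cores + 1)}

out = die_outcomes()
# With a (made-up) 90% per-core yield, ~66% of dies are fully working quads,
# but ~29% have exactly one dead core -- sellable as tri-cores instead of
# scrap, which is the whole point of zapping the bad core.
print(round(out[4], 4))  # 0.6561
print(round(out[3], 4))  # 0.2916
```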

          • by jmv (93421)
            I think that's what Sony does with the PS3 Cell processor. Chip comes out of the fab with 8 SPUs, Sony disables 2 that possibly don't work, so they can get good yields (hence good prices) for a 6-SPU Cell.
            • by sjames (1099)

              It's a fairly common practice. The Celeron was originally the same thing, but with banks of cache disabled. It's a logical next step after bin sorting for speed. I didn't know Sony did that w/ Cell but it makes sense.

        • by NormHome (99305)
          Three cores does seem weird, but there is a valid explanation for it. I'd read that due to fab problems, a percentage of quad-core chips ended up with one core that didn't work but nothing else wrong with the chip; the three remaining cores work just fine, so to maximize production and minimize losses they're selling them as triple-core CPUs.
        • by hcdejong (561314) <hobbes&xmsnet,nl> on Monday February 25, 2008 @04:45PM (#22550412)

          ...then shalt thou count to three, no more, no less. Three shall be the number thou shalt count, and the number of the counting shall be three. Four shalt thou not count, neither count thou two, excepting that thou then proceed to three. Five is right out. Once the number three, being the third number, be reached, then l...
      • Re: (Score:3, Funny)

        by xSauronx (608805)
        Next they're going to bump it up to 11, for when you need just a little more oomph to get your work done.
        • No, no, they'll have a separate knob for that - similar to the old "turbo" buttons which everyone just left on all the time. That way you can "crank it to 11" when you need it and then turn it back down when you get tired of the noise of the fans running at 11 too.
    • Re:Wow (Score:4, Interesting)

      by milsoRgen (1016505) on Monday February 25, 2008 @03:25PM (#22549286) Homepage

      They could have gone to 3 cores, like the competition.
      Which is a fantastic move, as they are simply 4-core chips with a core disabled due to manufacturing defects and what have you.
    • Re:Wow (Score:4, Insightful)

      by Tridus (79566) on Monday February 25, 2008 @03:29PM (#22549346) Homepage
      Cores are the new gigahertz. Where Intel previously raced to get the GHz up higher than AMD (no matter if it was useful or if anybody really wanted it that way), now they race to get more cores than AMD (no matter if it was useful or if anybody really wanted it that way).

      This is great for many computing environments, but my home system is not one of them. Honestly there isn't much software I use on a regular basis that really taxes the second core, let alone six of them.
      • by Firehed (942385) on Monday February 25, 2008 @03:35PM (#22549426) Homepage
        Do you only have one program ever open at a time? Not all of my software is multi-core aware by any means, but it still makes a tremendous difference when they're not all fighting over the same bit of silicon. I tend to have a dozen or so programs open at any given time at home (not to mention background processes) and while they're not all resource hogs, I like being able to let something churn away in the background without slowing down what I'm working on at the time to a crawl.
        • Re:Wow (Score:5, Informative)

          by Tridus (79566) on Monday February 25, 2008 @03:51PM (#22549662) Homepage
          Yes, I do. I don't often have something running in the background that's really active, though, like a compiler. A typical setup would be something like World of Warcraft, Ventrilo, Firefox, Wireshark (watching WoW traffic is a hobby during wipe recovery), and stuff like that. The second core still isn't particularly taxed.

          In order to spike both cores, I need to start something like a compiler or video encoder, which is going to also eat I/O time. It's the I/O that slows down WoW more than the CPU usage. Since adding four more cores drastically increases my parallel processing power (which I don't need more of now), and doesn't do a thing for my I/O throughput (which I do need more of), it's not really all that helpful.

          That's why this doesn't excite me a whole lot. We were already at a spot where a single core is more than fast enough for a majority of mainstream users, and now we're going to give out six of them? Other than being able to run spyware more efficiently, what's actually being gained?

          (There are people who will benefit from this type of thing, of course. I just don't see the mainstream market as part of that group.)
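The complaint above is basically Amdahl's law: if most of your wall time is serial or I/O-bound, extra cores buy almost nothing. A minimal sketch follows; the 90% serial/I-O fraction is made up for illustration, not a measurement of WoW or anything else.

```python
def amdahl_speedup(parallel_fraction, cores):
    """Overall speedup when only part of the workload can use extra cores."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / cores)

# If 90% of the wall time is serial or I/O-bound, six cores barely help:
print(round(amdahl_speedup(0.10, 2), 3))  # ~1.053x with two cores
print(round(amdahl_speedup(0.10, 6), 3))  # ~1.091x with six
```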
          • Re: (Score:3, Informative)

            by Sebastopol (189276)
            This is entirely server related, nothing to do with gamers.

            In server land, the more cores you jam on a CPU, the fewer blades you need on the rack. The fewer blades on the rack, the greater the TPS on that rack, the more efficient the server farm.

            WoW won't use all the cores, but Yahoo!, eBay and Google definitely will.
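The density argument comes down to simple arithmetic; here's a quick sketch. The sockets-per-blade and 960-core figures are invented for illustration, not any vendor's spec.

```python
def blades_needed(target_cores, sockets_per_blade=2, cores_per_socket=4):
    """How many blades it takes to reach a core count, rounding up."""
    per_blade = sockets_per_blade * cores_per_socket
    return -(-target_cores // per_blade)  # ceiling division

# Going from quad-core to six-core sockets for the same 960 cores:
print(blades_needed(960, cores_per_socket=4))  # 120 blades
print(blades_needed(960, cores_per_socket=6))  # 80 blades
```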

            • The fewer blades on the rack, the greater the TPS on that rack

              That made no sense. I meant you could put more blades on the rack, or more rack units on the rack to increase TPS. Duh.

              Originally I was headed down the power path: fewer rackmounts means fewer power supply conversions from the rack 240V/480V bus, but then my brain jumped ahead and realized the TPS density increases.

              • by Firehed (942385)
                So, you were going for "the more chips on a blade/rack unit, the more processing power you can fit in a 42U", right? The power and conversions, I should think (not exactly being a server admin), are more dependent on the efficiency of the chips or rack units that house them, not so much the number of cores. Double the cores without increasing the efficiency of the chips and you still double the power draw for your CPU overhead (which means you need a bigger, more powerful HVAC as well), it's just fitting
          • by kimvette (919543)

            Yes, I do. I don't often have something running in the background that's really active, though, like a compiler. A typical setup would be something like World of Warcraft, Ventrilo, Firefox, Wireshark (watching WoW traffic is a hobby during wipe recovery), and stuff like that. The second core still isn't particularly taxed.

            Do you have a hybrid RAID chipset (such as Intel's "Matrix")?
            Is any DSP function handled by your processor for the LAN or USB interfaces?
            What about your sound card? Do you have real hardware wa

          • When you are watching WoW traffic, have you ever been tempted to analyse the packets? I thought it might be useful to make a program that extracts English messages from a WoW bitstream, e.g. whispers and other chat. These are sent in plain text. But I discovered that the packets appeared to have no obvious structure that would allow chat messages to be distinguished from other data. The nearest thing to a packet header is a 32 bit word that appears every so often. Its position suggests a packet header, sinc
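For anyone curious about that kind of traffic analysis, here's a generic sketch of scanning a capture for plausible 32-bit length-prefix headers. To be clear, the record format below is hypothetical; nothing here reflects the actual WoW wire protocol, which (as the parent notes) has no obvious structure.

```python
import struct

def scan_candidate_headers(stream: bytes, max_len=4096):
    """Scan a byte stream for offsets where a 32-bit little-endian word
    could plausibly be a length prefix: a sane value, and small enough
    that the implied payload fits inside the capture. This is a generic
    heuristic, not knowledge of any real wire format."""
    hits = []
    for off in range(0, len(stream) - 4):
        (word,) = struct.unpack_from("<I", stream, off)
        if 0 < word <= max_len and off + 4 + word <= len(stream):
            hits.append((off, word))
    return hits

# Toy capture: a 5-byte "payload" behind a 32-bit length prefix.
demo = struct.pack("<I", 5) + b"hello"
print(scan_candidate_headers(demo))  # [(0, 5)]
```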
        • Do you only have one program ever open at a time? Not all of my software is multi-core aware by any means, but it still makes a tremendous difference when they're not all fighting over the same bit of silicon. I tend to have a dozen or so programs open at any given time at home (not to mention background processes) and while they're not all resource hogs, I like being able to let something churn away in the background without slowing down what I'm working on at the time to a crawl.

          For that, a dual core is an excellent idea, a quad core is not. A quad (or sex?) core is only useful when your workload can be divided into 4 roughly equal parts. This is true for servers, which are running dozens of threads of the same application at the same time, this may be true for some workstations running specialist applications that are sufficiently multithreaded to make use of multiple cores, and no doubt in the future there will be games that make good use of multiple cores, but for normal desk

      • This is great for many computing environments, but my home system is not one of them. Honestly there isn't much software I use on a regular basis that really taxes the second core, let alone six of them.

        Some people run windows, and they have to have a virus checker running all the time. Loads of activity every so often, which makes another core nice. And the window manager hangs sometimes and does these bizarre full-desktop refreshes every time you look at it crossways. It's good to have your program

      • Re: (Score:2, Insightful)

        by wonnage (1206966)
        The problem is that more and more it's technologically infeasible to increase clock speed without frying chips built with ever-tinier components. We still have the ability to cram a few more transistors onto the silicon, though. This by itself doesn't solve anything - having the ability to cram transistors on doesn't do jack if you can't make use of them. Right now, increasing the core count seems to be the best way to utilize the room on the chip, which is why all the major processor manufacturers have banked o
    • by alexburke (119254)

      They could have gone to 3 cores, like the competition. That seems like the logical thing to do, but they said "Fuck it, we're going to six". What part of this don't you understand? If two cores is good, and four cores is better, obviously six cores would make them the best fucking CPU that ever existed.

      As someone who works for Sun, I feel the need to point you to our lovely . You will soil yourself. [sun.com]

      If you need even more geek pr0n, without me breaking my NDA I can point you towards Victoria Falls [sun.com]. Hardware support for 128 concurrent threads per socket with support for linking two sockets for 256 threads sharing common memory. :)

      • by alexburke (119254)
        *sigh*

        As someone who works for Sun, I feel the need to point you to our lovely UltraSPARC T2 [sun.com]. You will soil yourself.
        • As someone who works for Sun, I feel the need to point you to our lovely UltraSPARC T2. You will soil yourself.
          Well no, but I still think your post deserves mod points.
    • by howman (170527)
      Well actually, they were going to go with 5 cores but in the end decided that the sixth one would be needed to run the cooling and fire suppression systems.
    • by gringer (252588)
      Just FWIW, Gillette have already manufactured 5 blade razors.

      http://www.boingboing.net/2005/09/14/gillettes-5blade-raz.html [boingboing.net]
  • by Dice (109560) on Monday February 25, 2008 @02:57PM (#22548928)
    The Wikipedia page on QuickPath [wikipedia.org] is very light on details. Does anyone know how it stacks up against HyperTransport [wikipedia.org]? One of the most mouth-watering proposed uses for HT3 that I've heard of was the possibility for an external HT3 bus on a machine which could be used to link together multiple physical machines into one giant NUMA beast.

    Imagine a Beowulf of those ;)
    • Imagine a Beowulf of those ;)
      I tried, but the very thought of it very nearly took over my brain. If I hadn't begun to choke on my own drool, I might not have survived to welcome our new 6-core QuickPath Overlords.
    • by Anonymous Coward on Monday February 25, 2008 @03:23PM (#22549268)
    • One of the most mouth-watering proposed uses for HT3 that I've heard of was the possibility for an external HT3 bus on a machine which could be used to link together multiple physical machines into one giant NUMA beast.
      Horus was so mouth-watering that it may have driven Newisys out of business.
    • by springbox (853816)
      I'm surprised that Intel's front side bus has lasted this long
    • Re: (Score:3, Interesting)

      by Beliskner (566513)

      the possibility for an external HT3 bus on a machine which could be used to link together multiple physical machines into one giant NUMA beast
      That's what the Cray XT5 [cray.com] does - uses Hypertransport on new AMD Quad Core Barcelona to link multiple CPUs via their Seastar chip, and with FPGA accelerators too, sheesh
  • But... (Score:5, Funny)

    by chinkuone (1150389) <chinkuone@gmail.com> on Monday February 25, 2008 @02:59PM (#22548946)
    Still doesn't run Crysis.
  • by TeknoDragon (17295) on Monday February 25, 2008 @03:01PM (#22548984) Journal
    QuickPath: because Intel doesn't adopt standards... it rewrites them.
    • by nonsequitor (893813) on Monday February 25, 2008 @03:21PM (#22549240)

      QuickPath: because Intel doesn't adopt standards... it rewrites them.
      Why should Intel pay AMD to license HyperTransport? The specs may be open to developers, but that does not mean they are unencumbered by patents. Even if they could, why would they?

      I don't really know the situation surrounding the technology, but even if Intel could use it for free, they would lose a huge battle in the PR War. I can see it now, "Remember that interconnect AMD has been using for years now? Well our design has finally caught up with theirs enough to use it." Remember that to the masses, the non-slashdot crowd, they have no idea what the techno-jargon spouted by Intel marketing means.

      Intel currently has the superior technology, but that is because of superior fabrication capabilities, not because of a superior architecture, if I've been following this correctly over the last few years. The general public is oblivious to the fact that internally the AMD architecture is cleaner and more elegant; the only thing they have to go on is marketing. If Intel were to adopt HyperTransport, which IIRC is trademarked by AMD, that would be a huge step backwards for Intel marketing, which is just recovering now that the Core 2 architecture has put them back on top.
      • Royalty free membership must be a bad thing?

        see - http://www.hypertransport.org/consortium/index.cfm [hypertransport.org]
        • I made the disclaimer in my post that I had no idea what the licensing was for HyperTransport. However, I think it would be bad PR for Intel to adopt now what AMD has been doing for years, even if it is the right thing to do technologically. I also qualified PR to mean Public Relations with the unenlightened masses, those who know nothing of Open Source Software or Open Standards.

          Intel has always been about the marketing; first it was clock speed, now it's cores. Bear in mind marketing usually has very little
      • by Anonymous Coward on Monday February 25, 2008 @04:15PM (#22549962)
        Please check your facts, AMD doesn't _own_ HyperTransport, so why would Intel have to pay them anything? HyperTransport can be used royalty-free by anyone joining the HT consortium. Yes, AMD is a member of the consortium, just like a lot of other tech companies such as NVIDIA, one of AMD/ATi's biggest competitors. AMD are not the owners of the technology nor are they in control of the HT consortium. They are simply one of the most visible tech companies that has strongly embraced HT in their products.
        • The reality isn't what matters. What matters is public opinion.

          It's already bad enough that Intel's own 64-bit successor, the Itanium, is widely called "Itanic" and that they ended up adopting AMD's 64-bit instruction set.
          Now if once again they use the same technology as AMD instead of building their own that the marketing department will call "better", the public will start to think that Intel isn't able to come up with new ideas and relies on AMD to make relevant advances in CPU technology.
        • by Agripa (139780)
          AMD does however own the cache coherency protocol that they use over HyperTransport for processor to processor communication so Intel would either have to design their own or license AMD's implementation.
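For context on what such a coherency protocol actually does, here is a toy transition table for textbook MESI. AMD's real protocol is MOESI (which adds an Owned state), and none of this reflects the proprietary implementation discussed above; it's just the classroom version.

```python
# Toy MESI transition table: (state, event) -> next state.
# Events: a local read/write, or snooping a remote read/write.
MESI = {
    ("I", "local_read"):   "S",  # miss; fetch a shared copy
    ("I", "local_write"):  "M",  # miss; fetch exclusive and dirty it
    ("S", "local_write"):  "M",  # upgrade: invalidate other sharers
    ("S", "remote_write"): "I",  # another cache took exclusive ownership
    ("E", "local_write"):  "M",  # silent upgrade, no bus traffic needed
    ("E", "remote_read"):  "S",  # someone else now has a copy
    ("M", "remote_read"):  "S",  # write back dirty data, then share
    ("M", "remote_write"): "I",  # write back, then invalidate
}

def step(state, event):
    # Pairs not listed (e.g. a local read while already in S) keep their state.
    return MESI.get((state, event), state)

# A line written locally and then read by another core goes M -> S:
print(step(step("I", "local_write"), "remote_read"))  # S
```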
      • Re: (Score:3, Insightful)

        by mihalis (28146)

        QuickPath: because Intel doesn't adopt standards... it rewrites them.

        Why should Intel pay AMD to license HyperTransport? The specs may be open to developers, but that does not mean they are unencumbered by patents. Even if they could, why would they? I don't really know the situation surrounding the technology, but even if Intel could use it for free, they would lose a huge battle in the PR War. I can see it now, "Remember that interconnect AMD has been using for years now? Well our design has finally caught up with theirs enough to use it." Remember that to the masses, the non-slashdot crowd, they have no idea what the techno-jargon spouted by Intel marketing means.

        Note that Intel did adopt AMD's 64-bit extensions to the x86 instruction set. I regard that as far more significant than, hypothetically, licensing HyperTransport. For example see this article on Wikipedia [wikipedia.org] or any other history of AMD64/Intel64 or "x86-64" or whatever everyone is calling it these days.

        This was a PR blow to Intel, but still made good business sense at the time, and seems to have been good for Intel and for AMD (bad for Itanium though).

        • True, but... (Score:3, Interesting)

          by Junta (36770)
          Intel did have a hell of a time confusing people before the concrete samples were available as to whether it was the same thing as AMD's 64-bit. They avoided using any term AMD associated with it for a time, instead tossing around ia32e and em64t and bs like that. I know some projects even baked into plans how to cope with yet another processor architecture for lack of a commitment from Intel that their 64-bit x86 compatible stuff would be the same.

          Intel's hand was effectively forced because they learned
      • Re: (Score:3, Informative)

        by Mex (191941)
        "The general public is oblivious to the fact that internally the AMD architecture is cleaner and more elegant, the only thing they have to go on is marketing."

        It doesn't help that in most benchmarks, AMD has been trounced by Intel this past year.

        http://www23.tomshardware.com/cpu_2007.html [tomshardware.com]

  • Welll.... (Score:5, Funny)

    by downix (84795) on Monday February 25, 2008 @03:06PM (#22549046) Homepage
    Does it go to 11?
    • by BeeBeard (999187)
      Probably 12, since we're apparently going for petty oneupmanship with the number of cores we slap on a piece of silicon these days.

      It makes me wonder why Intel doesn't just go "You know what? 100 cores, bitches. You heard us," kind of like these guys [theonion.com]. :)
      • Re: (Score:3, Interesting)

        by Firehed (942385)
        I seem to remember Intel made some proof-of-concept 80-core chip a while ago. Close enough.
  • FSB (Score:2, Interesting)

    by Anonymous Coward
    "Intel will abandon the front-side bus..."

    I think I speak for us all when I say ABOUT FSCKING TIME!
    • Re:FSB (Score:5, Interesting)

      by networkBoy (774728) on Monday February 25, 2008 @03:30PM (#22549358) Homepage Journal
      Very true!
      Now, hopefully Intel will open the new bus to third party apps (like that FPGA opteron drop-in). I'll admit I'm an Intel fanboy, but I'd buy an opteron system in a heartbeat if I could pony up the $5K for that co-processor...

      What surprises me is the current lack of complaints that you can't drop these new processors into an old board, as a new socket will be required (this is because the northbridge is rolling into the CPU IIRC). I don't see it as a big deal, because usually when upgrading the CPU one also is upgrading the memory and MB as well.
      -nB
  • This is a server chip, and the FSB may get in the way and be a big slowdown when you need to go to another socket or need to load a lot of data to the CPU. All of that L3 and L2 helps, as does the 24MB buffer in the chipset that needs FB-DIMMs, but all of that pushes the cost up. Once Intel drops the FSB, the need for all of that L2 and L3 will go down, as will the move to DDR3, which gives off less heat and needs less power.

    Also, there need to be QuickPath / HTX slots, not sockets, for add-on 3rd-party chips on t
    • The whole point of the QPI is to not have a loaded FSB that makes scaling difficult. IDE-to-SATA went point-to-point, PCI went to point-to-point in PCI-e, and now northbridge communications that have been bus-oriented logically go the same way (past tense for AMD offerings). With respect to concerns about accessing non-local resources, that's what NUMA is all about and it has worked well for AMD. Essentially, the penalty isn't that bad and intelligent NUMA-aware OS process scheduling avoids the worst cas
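The "penalty isn't that bad" claim can be made concrete with a toy average-latency model. All the latency numbers below are invented for illustration, not measurements of any real NUMA platform.

```python
def avg_latency(local_ns, remote_ns, local_fraction):
    """Expected memory latency given what fraction of accesses stay local."""
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

# With remote access ~1.5x the cost of local, a NUMA-aware scheduler that
# keeps 90% of accesses on the local node pays only a small average penalty
# compared to random placement:
naive = avg_latency(60, 90, 0.5)   # 75ns if placement is random
aware = avg_latency(60, 90, 0.9)   # ~63ns with good placement
print(naive, aware)
```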
  • I don't even own a dual core machine and now there's going to be Windows Craving 6-core machines that run Vista Ultra Quantum Home Edition!

    I feel like they would do us all a favor if they just told us the date that none of the software we'll need to run will stop operating on 'old' hardware. I can hardly wait for my HS Jr. to go off to college and they tell me I need yet another $2400 laptop as a requirement.
    • To start, technology progresses. You sound like you don't want progress because it 'forces' you to spend money by making older hardware obsolete. This older hardware comes down precipitously in price; I'm not sure when the last time you looked at laptops was, but it can't have been recently. You can buy a good laptop for $1,500, and one that will handle anything but graphics and some engineering software for $800, it just won't be quite as snappy.

      If you're spending more than that, you're either buying a desktop rep
      • by gelfling (6534)
        No no no no no no no. This year's laptop does not 'do' 4 or 8 or 16x more or better 'work'. It doesn't. It simply does the same thing with marginally more glittery gewgaws a bit faster accounting for the corresponding growth in overhead.

        It's like having bigger open windows in your house in the summer and having to buy ever larger central air conditioning units to compensate for your new and improved larger open windows.

        I have no truck with better performance. The problem is we DON'T DO ANYTHING with it. We
        • by oatworm (969674)
          I know better than to reply to this... but I'm doing it anyways.

          We don't use one core as a dedicated security and encryption subsystem.

          Personally, I'd be a little ticked if my computer required an entire core to handle security and encryption.

          We don't effectively use our 1GB baseline RAM footprints to create an out-of-the-box virtualization environment.

          Does Java, .NET, Python, or any of the other systems out there that rely on "virtual" machines count? If not, well, why would most consumers need a full-out virtual machine with separate operating systems and the like? I could see it for some sort of kiosk-mode application, which might make sense for homes with lots of kids, but I'm s

        • by Rakishi (759894)

          I have no truck with better performance. The problem is we DON'T DO ANYTHING with it.

          No, YOU don't do anything with this. I, on the other hand, am able to go through an order of magnitude more data in the same amount of time, run multiple tasks at once and so on. I've run Windows 2k/XP for a long time on a lot of different hardware and I can tell you that the difference is massive.

          We don't use one core as a dedicated security and encryption subsystem.

          Why would you waste a whole core on this? Encryption will 99.9% of the time be doing jack shit and software can already do various types of encryption if you want it to. Why are you trying to FORCE people to do thin

          • by gelfling (6534)
            Ok I give up. Run all 6 cores to support Office, iTunes and a browser. You win.
            • by Rakishi (759894)
              In other words you're a selfish jackass who can't be bothered to read what others write and believe everyone is just like you. Sorry to break it to you but many people, unlike you, are not morons. Modern games easily use multiple cores, photoshop does as well, high res movies, video editing (a relatively popular hobby) likewise does and anyone who multitasks does better with them (within limits).

              Then again if you want to run all that on a pentium 200mhz then be my guest, unlike you I have done that (with mo
        • I don't know about not doing anything with it. My previous computer was roughly 4 years old at its replacement, and was fairly high end when it was new. It could do everything that this new computer can do (with the exception of some games), but the new one can do them all at once. Do I need to close out of some office programs because I want to play a game? Exit out of everything RAM-intensive when I want to burn a DVD?

          It's also considerably faster at numerous tasks, particularly DVD burning (my old computer would t
