
Intel Quietly Introduces 3.8GHz P4

BatonRogue writes "I didn't see this anywhere else, but it looks like Intel has quietly launched their Pentium 4 570J running at 3.8GHz. The J denotes Intel's Execute Disable Bit support, which they have also quietly introduced (it seems, to save face over being second to support it, behind AMD). AnandTech seems to be the only place to have a review of the 570J. It performs reasonably well and even better than AMD in some areas, while falling behind in things like games. AnandTech has a nice one-page benchmark comparison of the 570J to AMD's 4000+ as a quick reference."
  • by boringgit ( 721801 ) on Sunday November 14, 2004 @02:05PM (#10813919) Homepage
    I can't help but be amused at the way Intel have had to "sneak" the fastest model of their Flagship processor out of the door.

    Does anybody remember how, a few years ago, the Athlon was outperforming anything Intel had to offer, yet they still claimed it was only competing with the Celeron?
    • Don't mention Celeron. I don't know why Intel keep on releasing it. They give low-budget a new low. In today's market I just don't understand the need to have a low-end Celeron line.

      I just don't believe customers can't wait two weeks for the price of a Pentium 4 to drop, and that they MUST have a higher-GHz-count Celeron today. What's even worse are the laptop Celerons, which perform like relabeled 486 chips.

      • If Intel didn't have the Celeron, there would be nothing to compete with the Athlon XP/Duron at the low end. Of course, the Celeron really doesn't compare to the Athlon XP, but if there were no Celeron, OEMs like Dell would be forced to use AMD chips in their low-end machines because the P4s would simply be too expensive.

        With news that Dell is starting to use AMD chips in their servers, this could change. If Dell moved to using AMD in their low end systems, the Celeron would be about finished.
      • Don't mention Celeron. I don't know why Intel keep on releasing it ... In today's market I just don't understand the need to have a low-end Celeron line.

        They keep releasing Celerons because there is a large market for brand-new $400-$500 computers. Dell and HP can't build them without sub-$100 processors and matching low-end chipsets.

        They give low-budget a new low.

        According to another Anandtech article [anandtech.com], today's Prescott-based Celerons (Celeron D) give surprisingly good performance for "low-budget".

    • I used Intel exclusively from the early '90s until 2002. Since then, all new machine purchases for both my home and business have been AMD. I insist on AMD for any new machine. I am still shocked to find AMD's chips consistently priced less than Intel's (I have no idea how they do it).

      I have nothing personal against Intel, as they did much for the PC industry and served me well for a long time. They still make excellent motherboard chipsets as well. I have come to realize, however, that AMD consistentl

      • I am still shocked to find AMD's chips consistently priced less than Intel's (I have no idea how they do it).

        And later:
        I've seen very few AMD commercials over the years, but I can't get those dancing Pentium 4 "Blue Men" out of my head.

        Maybe there's a relation?
  • Weird (Score:5, Insightful)

    by FiReaNGeL ( 312636 ) <.moc.liamtoh. .ta. .l3gnaerif.> on Sunday November 14, 2004 @02:06PM (#10813921) Homepage
    Can someone justify why they compared Intel's 3.8 GHz to AMD's 4000+ (4 GHz equivalent, theoretically)? Maybe they wanted to compare both companies' highest-speed CPUs... anyway, the only positive side I see in these high-speed CPUs is that they'll drive the prices of their (somewhat) slower counterparts down... the AMD 3500+ is already at a very interesting price/performance ratio, and it can only get better... and HL2 is only days away!
    • Re:Weird (Score:2, Informative)

      by Anonymous Coward
      I won't try justifying it (though I believe it to be your first guess), but the relationship between GHz and PR is (somewhat) meaningless anyway.

      4000+ doesn't mean "roughly equivalent to a P4 @ 4.0GHz", but instead "roughly equivalent to a Thunderbird @ 4.0GHz", so the comparison even between a 3.8 and a 3800+ could still be construed as not being fair (for one side or the other).
      • Re:Weird (Score:3, Insightful)

        by Jeff DeMaagd ( 2015 )
        4000+ doesn't mean "roughly equivalent to a P4 @ 4.0GHz", but instead "roughly equivalent to a Thunderbird @ 4.0GHz",

        I think AMD tries to claim that, but I'm not convinced it is true. I went to a lecture given by an AMD engineer, and he said the processor rating really was based on the equivalent-speed Intel product. A problem here is that the vastly different architectures and computer topologies make the different brands of CPUs better at different things, but it is an average based on a range of benchmarks.
    • Re:Weird (Score:3, Insightful)

      by ssimontis ( 739660 )
      If they wanted to compare the top processors from each company, why didn't they test the new P4 against an AMD 64 FX system?
    • They were comparing processors that were priced the same. This is the high end price point. They have links to the low and mid range processor price points also.
  • I can guess why... (Score:5, Insightful)

    by Avoid_F8 ( 614044 ) on Sunday November 14, 2004 @02:06PM (#10813922)
    while falling behind in things like games.

    Perhaps that's why it was quietly introduced? Gaming is really the only reason for a CPU upgrade these days. Knowing that AMD would achieve another victory in that area, why would they spend money promoting yet another little bump to the P4's clock speed? My guess is that they're waiting for the real kicker; this is just something to keep their heads above the water until it's ready.
    • by ScrewMaster ( 602015 ) on Sunday November 14, 2004 @02:21PM (#10813991)
      Yeah, gaming and high-end CAD. Seriously, the truth is that AMD and Intel could have milked the performance market for another ten years (much like Microsoft is still milking the desktop GUI market) but now, even commodity PCs are so fast that the mass market isn't feeling the slightest pressure to upgrade. At least, they aren't upgrading their CPUs. Printers, cameras, MP3 players, sound cards, WiFi ... sure. But for the vast majority of applications the current crop of CPUs is just total overkill.
      • by Canth7 ( 520476 ) * on Sunday November 14, 2004 @02:55PM (#10814146)
        Just wait until Longhorn comes out. 2GB of RAM and 4GHz so you can turn on all the eye candy. The biggest reason to make your OS prettier (and more bloated and resource-intensive) is because you can. Imagine trying to run Windows 98 with all the visual effects on a 486. Windows, KDE, OS X, etc., have increased the visual effect requirements slowly over the years. Sure, you can run your XP desktop without a background or window animations or ClearType fonts, but it doesn't come out of the box like that. If you have a faster CPU, your OS/applications will use it... eventually.
        • Sure. But there's a limit to just how far you can go with that, and I think we're fast approaching that limit now. The problem is that even Microsoft, the reigning champion of creeping (some would say creepy) features and bloated software, hasn't managed to bog down even a 1.4 GHz Athlon to the point where a cost-conscious user would feel compelled to upgrade. Hell, I have a 500 MHz Tecra 8100 laptop with Windows and Office XP, and it's more than snappy enough for what I do with it. Granted I put the f
          • growth curve is pretty FLAT

            since when is a 12% quarterly growth in revenues (to $9.2 billion USD) considered flat?

            • Well, look at this: [cnn.com] It's all relative to past performance.

              Microsoft is suffering the fate of all successful monopolies: they have to keep performing in order to keep their investors happy, but that's tough when they've already expanded their customer base about as far as they can. Microsoft's ongoing attempts to break new ground aren't just for the fun of it, they have to find new markets or find their own bubble bursting.

              Recent changes in upgrade policies (attempted changes, anyway ... big co
          • If there's a limit to random system bloat, it's a long ways beyond where we are now.

            Mac OS X needs to use the 3d card to get a smooth desktop display. There's a lot more they could be doing eye-candy wise that they're not.

            With Linux you can set a screensaver as your desktop wallpaper. With Windows, your desktop wallpaper can be a webpage with java applets in it.

            When we have 8 or 9 gigs of RAM in our machines, that sort of thing could be standard.
        • Just wait until Longhorn comes out. 2GB of RAM and 4GHz so you can turn on all the eye candy.

          Don't be silly - those aren't the required specs, those are the expected average specs by the time Longhorn is released. Or do you really think that they can be adding enough features to require 8-16 times the RAM that XP requires? (XP will run in 128MB; you'll want 256MB if you install a virus checker.)

      • Yeah, gaming and high-end CAD.

        I'd like to add applications that are almost infinitely scalable - for example, anything that you tell your computer to do and then walk away from for an hour. The first thing that comes to mind is transcoding DVDs. Maybe with a 3.8 it will take 4 hours instead of 7 with my 1800+ AMD.

        Still not buying one, but there are reasons.

        ~Will
    • by grmoc ( 57943 ) on Sunday November 14, 2004 @02:30PM (#10814031)
      Disagreement.

      There is always a need for more processing power.
      Computer vision, speech recognition (semantic processing is a b*tch), etc. are all still well beyond current computers' computational capabilities.

      If you're just thinking about computers as being for 'work==Word Processing/Spreadsheet Editing' and 'play==computer games', then you need to look a little further.

      More CPU power is always welcome. We shift what 'ordinary' means as computational power increases. Think of the day when you just speak to your computer and it speaks back. Science fiction? Well, still yes, but it is very likely that increased computational capability will be the catalyst for such a thing.
      • The tasks you name are probably better assigned to non-conventional computing devices.

        For many technology improvements, there needs to be an application that justifies buying the advancement, in order to sell it beyond the few early adopters, life-cycle replacers (businesses), and those whose computers have finally broken down and are not worth repairing. Even these people don't necessarily get the latest unless it is for research, design, media creation, etc., or the gamers, it seems the ones that have t
  • by Underholdning ( 758194 ) on Sunday November 14, 2004 @02:07PM (#10813930) Homepage Journal
    I once attended a lecture by one of the designers from AMD. He said that the clock speed of the processor was a key selling point. In reality, all the development that went into making processors operate at a higher clock speed could be spent in much better ways, making better and more efficient processors. But - alas - efficiency doesn't sell. High numbers on a package do.
    Anyway, do any of you actually have a specific need for high-frequency processors? Most of the projects I've been working on always had other bottlenecks, preventing me from utilizing the CPU completely.
    • by grmoc ( 57943 ) on Sunday November 14, 2004 @02:33PM (#10814043)
      Absolutely. Try processing 1920x1280 sized frames of video at 30 frames per second. Even if the bandwidth is there (and it is, just barely), the CPU doesn't keep up.

      Computer vision (and other computational perception/AI fields) eats up CPU like nobody's business... and while you may immediately think it's research, it is entirely possible that people in the broadcast industry attempt to do this kind of thing on a daily basis...
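
A back-of-the-envelope sketch of the data rate behind the comment above (the 24-bit RGB assumption is mine; the poster only gives the frame size and rate): uncompressed 1920x1280 video at 30 frames per second is roughly 220 MB/s of raw pixels before any vision processing even starts.

```c
/* Rough data-rate estimate for uncompressed 1920x1280 @ 30fps video.
 * The 24-bit RGB assumption is not from the comment above. */
#include <stdio.h>

int main(void)
{
    const double width  = 1920.0;
    const double height = 1280.0;          /* frame size from the comment above */
    const double fps    = 30.0;
    const double bytes_per_pixel = 3.0;    /* assumed 24-bit RGB */

    double pixels_per_sec = width * height * fps;
    double bytes_per_sec  = pixels_per_sec * bytes_per_pixel;

    printf("pixel rate: %.1f Mpixel/s\n", pixels_per_sec / 1e6);  /* ~73.7  */
    printf("raw data:   %.1f MB/s\n", bytes_per_sec / 1e6);       /* ~221.2 */
    return 0;
}
```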
      • I think this is why the 1080p HDTV standard isn't implemented anywhere. It just requires too much bandwidth to generate/distribute/display video at 1920x1080 @ 60fps progressive scan.

        Rumor has it that Texas Instruments' next-generation DLP processor will actually support the 1080p format. But I'm not holding my breath on it.

        -JungleBoy
        • TI supposedly had a 1080p chip a bit ago, they just didn't think there was a market for it.

          There are a few LCD and LCoS projection displays that are available in 1080p and higher resolutions, like 2k x 1.5k. Apple's desktop displays are in this range, and the same goes for some other products.

          I'm not sure exactly what the frame rate is, but 1080p video needs only to be 24 or 30fps, not 60.
      • Why the odd aspect ratio? That is 1.5:1, not 16:9. The largest HDTV resolution is 1920 x 1080.

        Any Radeon can decode MPEG-2 in 1080i without trouble. nVidia chips aren't enabled for this, although they do have the computational power.

        Even Microsoft's WMV9 only needs 3GHz equivalent CPU to play 1080p movies.
        • It's not an odd aspect ratio; I just mistyped.

          So... FYI (and correcting myself)
          1080i == 1920x1080x30 FPS (interlaced)
          1080p == 1920x1080x60 FPS (progressive)
          720p  == 1280x720x60 FPS (progressive)
          486i  == 720x486x30 FPS (interlaced) (i.e. NTSC)

          So, bandwidth comparison(s):
          1080i == 5.9*486i
          1080p == 11.8*486i

          In other words, a 1080i stream is about the same bandwidth as six 'regular' (i.e. SD) streams, and a 1080p stream is about the same as twelve SD streams.

          That's a lot of pixel pushing/processing.
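
A small sketch (my arithmetic, not the poster's) that reproduces the pixel-rate ratios above, counting interlaced formats at 30 full frames per second and progressive at 60, as the comment does.

```c
/* Reproduces the HD-vs-SD pixel-rate ratios quoted in the comment above. */
#include <stdio.h>

static double pixel_rate(double w, double h, double fps)
{
    return w * h * fps;
}

int main(void)
{
    double sd    = pixel_rate(720.0,  486.0,  30.0);   /* 486i (NTSC) */
    double i1080 = pixel_rate(1920.0, 1080.0, 30.0);   /* 1080i       */
    double p1080 = pixel_rate(1920.0, 1080.0, 60.0);   /* 1080p       */
    double p720  = pixel_rate(1280.0, 720.0,  60.0);   /* 720p        */

    printf("1080i = %.1f x SD\n", i1080 / sd);   /* ~5.9  */
    printf("1080p = %.1f x SD\n", p1080 / sd);   /* ~11.9 */
    printf("720p  = %.1f x SD\n", p720  / sd);   /* ~5.3  */
    return 0;
}
```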
    • by evilviper ( 135110 ) on Sunday November 14, 2004 @03:51PM (#10814504) Journal
      I once attended a lecture by one of the designers from AMD. He said that the clock speed of the processor was a key selling point.

      This must have been quite a while ago, before AMD's XP "QuantiSpeed" numbering got everyone to forget about the MHz. Now you look for a 3200+, not a 2GHz processor.

      Processor makers (namely Intel) have been the ones who have pushed the MHz myth upon the public. Now that they aren't able to continue it without running far hotter (and they notice a good number of sales are being lost because of that), they are backpedaling and giving up the MHz race.

      Most of the projects I've been working on always had other bottlenecks, preventing me from utilizing the CPU completely.

      While I/O bandwidth, the interrupt model, and many other crufty pieces of the PC architecture have become a bottleneck, there are still many CPU-bound applications.

      I'm doing a huge amount of video compression (TV capture, conversion to MPEG-4), and even though I'm using the very fast mplayer/ffmpeg for compression, CPU time is the bottleneck, and it would be much more convenient for me if I could do it faster. I'm sure I'm not unique, as many people are doing MPEG-2 encoding now to master/convert/copy DVDs.

      Encryption is a big CPU drain as well. Anything I'm doing over the network tends to need encryption - remote logins, file copies, etc. This is a real CPU hog. While it only costs about $100 to get a basic PCI crypto card, most people don't spend the money and leave their CPU to do all the work. Even if you buy the hardware, it limits you to only one or two methods, which forces your CPU to handle any other cases. And even if you can do hardware crypto all around, you'll probably also want to compress the data, which will load down your CPU pretty well.

      Compression is one way people work-around the other computer bottlenecks. If your storage or network connection isn't as fast as you'd like, you can use compression to speed the process up, which taxes the CPU. Compression speeds up my own network backups by about an order of magnitude.
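
A rough illustration of that trade-off (the backup size, link speed, and compression ratio below are assumptions for the sketch, not the poster's figures): compressing data before sending it shifts the bottleneck from the link to the CPU.

```c
/* Estimated transfer time for a backup over a slow link, with and without
 * compression. All three inputs are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    const double data_mb        = 10000.0;  /* 10 GB backup (assumed)     */
    const double link_mb_per_s  = 1.25;     /* ~10 Mbit/s link (assumed)  */
    const double compress_ratio = 5.0;      /* 5:1 on text/logs (assumed) */

    double plain_s      = data_mb / link_mb_per_s;
    double compressed_s = (data_mb / compress_ratio) / link_mb_per_s;

    printf("uncompressed: %6.0f s (~%.1f h)\n", plain_s, plain_s / 3600.0);
    printf("compressed:   %6.0f s (~%.1f h)\n", compressed_s, compressed_s / 3600.0);
    /* The savings only materialize if the CPU can keep the compressor fed
     * at link speed - which is exactly the CPU-bound work described above. */
    return 0;
}
```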

      Personally, I'm willing to stay back from the cutting edge, as a hundred MHz here and there isn't worth the premium. I'm also concerned with the heat output, and the power draw, and doing what I can to reduce those. However, I certainly do need number-crunching CPU power in some of my machines.
    • I do high-energy physics modeling. I currently have over 30 GHz of computers at my disposal, and it still takes a week for me to get results.
      • The problem that I see with that statement is that you immediately equate clock frequency with computational speed. Okay - just because, say, I give you a D flip-flop that can run at 10 GHz (how uber!) does not mean it has more computational power (rofl) than a D flip-flop that maxes out at 10 MHz. CPU design is important. If you're going to measure CPU performance by the Hz, you might as well measure it by core voltage, chip size, weight, attractiveness of packaging, or computrons.
    • Integrated circuit design needs lots of processing power for simulation and autorouting.
  • by Anonymous Coward on Sunday November 14, 2004 @02:09PM (#10813941)
    Intel's plans for a quiet introduction go down the drain.
  • by Indy1 ( 99447 ) on Sunday November 14, 2004 @02:11PM (#10813953)
    I built a machine for a grad student at work (I work IT for the engineering college), and the grad student insisted on Intel. I warned him that Intels run hotter and louder (because they need more cooling), but he said Intel anyway. Well, once I delivered the machine to him, the first thing he said was "wow, that thing is loud." I used a boxed Intel CPU (which comes with the heatsink and fan), and when you put it under load, you can hear it clear across the room. Intel's heat problem is just ridiculous, and I am afraid to even hear what a 3.8 GHz would sound like running full steam.
  • by Brian Stretch ( 5304 ) * on Sunday November 14, 2004 @02:12PM (#10813957)
    Look at the power consumption [anandtech.com] difference between this new P4 and the Athlon 64. It's big enough between the 90nm P4's and 130nm A64's, but a 90nm P4 system uses nearly twice the juice of a 90nm A64. Mind you, that's the difference between entire systems, so the consumption difference between just the CPUs is even more extreme.

    Imagine a Beowulf cluster of these...
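
To put a purely hypothetical number on what such a gap means over time (the 100 W system-level difference and the $0.10/kWh rate below are my assumptions, not figures from the linked review):

```c
/* Yearly energy cost of a sustained full-system power gap, run 24/7.
 * Both inputs are assumptions for illustration only. */
#include <stdio.h>

int main(void)
{
    const double watt_gap       = 100.0;          /* assumed P4-vs-A64 system gap */
    const double hours_per_year = 24.0 * 365.0;
    const double usd_per_kwh    = 0.10;           /* assumed electricity rate     */

    double kwh_per_year  = watt_gap * hours_per_year / 1000.0;
    double cost_per_year = kwh_per_year * usd_per_kwh;

    printf("extra energy: %.0f kWh/year\n", kwh_per_year);            /* ~876 */
    printf("extra cost:   $%.0f/year per machine\n", cost_per_year);  /* ~$88 */
    return 0;
}
```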
    • So that's what Intel means by "Extreme Edition": extreme power consumption, extreme temperatures, extreme cooling solutions...
    • Mind you, that's the difference between entire systems, so the consumption difference between just the CPUs is even more extreme.

      That's not necessarily true. I switched motherboards in one of my systems, keeping the same AMD 2000+ processor, and switching the motherboard alone added about 30 watts to my total power consumption. Sometimes the motherboard chipset makes a huge difference.

      The new power-sapping motherboard in question is an Asus A7V600-X. I exchanged it twice, assuming a problem, only to fi

  • Oh, wait. (Score:5, Funny)

    by Anonymous Coward on Sunday November 14, 2004 @02:12PM (#10813958)
    For a moment there I read "Executive Disable" bit. I'd have bought that gadget in a minute!
  • Much needed (Score:5, Funny)

    by wombatmobile ( 623057 ) on Sunday November 14, 2004 @02:16PM (#10813966)
    Cool. This should make my Word 97 fly.
  • Mmmmmm... (Score:5, Funny)

    by dethl ( 626353 ) on Sunday November 14, 2004 @02:16PM (#10813967)
    A 3.8GHz P4 chip, out in time for people who need an extra computer and an extra space heater.
    • I already have a space heater in my room... It only runs at 1.2 GHz, but it does come with a bunch of other devices running at 5400 rpm and 7200 rpm, which make up the bulk of the heat.

      You want the room hotter? Access more drives! Cooler? OK, don't touch as many drives!

  • by Anonymous Coward on Sunday November 14, 2004 @02:27PM (#10814017)
    In fact, the only thing anyone noticed was the rise in ambient air temperature.
  • by fuck_this_shit ( 727749 ) on Sunday November 14, 2004 @02:35PM (#10814052)
    I blame Intel for global warming
  • by mOoZik ( 698544 ) on Sunday November 14, 2004 @02:39PM (#10814071) Homepage
    I find that my 2.0 GHz can hardly heat the room as quickly as I'd like it to. Maybe if I get the new Intel 3.6 GHz, I could also have the added benefit of toasting marshmallows on it.

    • I was at a LAN party last weekend, and I actually can't concur with that. We were out in a small house in the countryside, with outside temperatures just above freezing and no heating. Naturally there were mostly high-end computers, but until there were more than about five of us there, the room didn't really get uncomfortably hot. ;)
  • I would like to see more benchmarking of software compiled and optimized for each processor. While it is useful to compare how CPUs execute identical code, that doesn't tell the whole story.

    The main problem is that precompiled binaries may have been optimized for one processor or another, introducing bias into the study. I'm not saying we should get rid of this kind of benchmarking, but to see the big picture, we also need results from programs compiled from source and optimized for each processor.

    • by Slack3r78 ( 596506 ) on Sunday November 14, 2004 @03:01PM (#10814199) Homepage
      If this were a Linux comparison, I'd probably agree with you. But as it stands, outside of the Mozilla test, I saw almost entirely commercial Windows software, which you don't have the option of compiling yourself.

      While a Linux comparison might give you a better idea of the raw capability of each processor, keep in mind that Windows has a 90% marketshare, and as such, the way Anandtech tests is closer to "real world" performance for most people.
      • The thing that bothers me about benchmarking software is this: At some point, someone compiled that software. In most cases, you could track down who produced which benchmarks, but identifying the machine the final product was compiled on, the architecture used, compile flags, and so forth is a different matter. So you have benchmarks, but you have no verifiable means of determining if they're biased towards any particular processor/architecture.
      • All I'm saying is that they could include at least one program compiled from source in the benchmarking, such as oggenc or lame, to demonstrate the "raw capability of each processor." Of course, different types of programs will naturally be stronger on different platforms (e.g. gaming, audio encoding, video encoding, etc.), so this is not a silver bullet. It would, however, reduce the problem of benchmarking processors with code optimized for another architecture.
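
A minimal sketch of the "compile it yourself" idea: the same CPU-bound source is built once per target and then timed on each machine, so neither CPU runs code tuned for the other. The toy workload below is just a stand-in for a real encoder like lame or oggenc; the per-target flags shown in the comment are the kind supported by a gcc of that era.

```c
/* Toy CPU-bound benchmark; the point is the per-target compile, not the loop.
 * Example builds (gcc 3.4-era flags):
 *   gcc -O2 -march=pentium4 bench.c -o bench-p4
 *   gcc -O2 -march=k8       bench.c -o bench-a64
 * Then time each binary on its matching machine. */
#include <stdio.h>

int main(void)
{
    double acc = 0.0;
    long   i;

    /* Simple floating-point workload; any real encoder would serve the same
     * purpose, as the parent comment suggests. */
    for (i = 1; i <= 100000000L; i++)
        acc += 1.0 / (double)i;

    printf("harmonic sum = %f\n", acc);
    return 0;
}
```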
  • by GrouchoMarx ( 153170 ) on Sunday November 14, 2004 @03:05PM (#10814221) Homepage
    For those who don't know what this is (I didn't), Intel's writeup on it is here [intel.com]. It doesn't look completely evil, but then it is their own marketing docs. Anandtech [anandtech.com]'s writeup is similarly positive, more or less.
    • It doesn't look completely evil, but then it is their own marketing docs.

      Of course it's not completely evil. Execute disable bit is only kinda evil. Completely evil only comes with evil bit [slashdot.org] support, which was introduced last year. Intel and its partners should be completely evil compliant by Q3 2005.
  • It's interesting to note that the idle power consumption is actually lower than that of a 3.0 GHz P4 530. Could this be an indication that Intel is trying to rectify the problems with the 90nm process?
  • by freelunch ( 258011 ) on Sunday November 14, 2004 @04:15PM (#10814669)
    The benchmark referenced in this article gives Intel a big break by not comparing the Athlon 64 in native 64 bit mode. The few articles that do typically don't come right out and show the graphs side by side with Intel. 64 bit support makes a big difference in an increasing number of applications.

    Another important fact - a socket 939 based motherboard purchased today should accept a dual core Athlon 64 in about a year. The dual channel memory controller in the 939 version means there will be plenty of memory bandwidth for that upgrade.

    Encoding and transcoding video and audio are two great examples of CPU intensive work that aren't "games".

    I run natively compiled Gentoo on my Athlon 64 system.
    • Yeah, but in a year I'd want a new motherboard anyway. I always upgrade processor and motherboard at the same time (and often RAM as well, though I don't plan to this time as DDR400 still will work with the inexpensive A64 3400+ boards). Then I can pass the mobo/processor(sometimes RAM) on to family or another computer for me to play with...
  • by HungWeiLo ( 250320 ) on Sunday November 14, 2004 @04:26PM (#10814766)
    ...successfully introduces the first integrated I/O chipset which can sync up all critical peripherals to be on the same bus speed. Video cards and CPUs far exceed any processing capacities provided by memory or storage components. While there still may not be the "killer app" to justify all that extra power, it will allow the respective company to temporarily get a hearty headstart in the dick-waving contest.
    • Hello, I'm replying to my own post here.

      My parent post was actually part of research I'm doing in school for a technical communications class. I was gathering information on misinformation in technical advertising in the marketplace by purposely putting non-truths into my post. As several readers have already pointed out, the parent post was garbled gibberish. The intent of this is to illustrate how advertising with the appropriate buzzwords can generate positive word-of-mouth from the general public (ju
  • NX Bit (Score:4, Informative)

    by RAMMS+EIN ( 578166 ) on Sunday November 14, 2004 @04:48PM (#10814902) Homepage Journal
    ``Intel's Execute Disable Bit support, which they have also quietly introduced (it seems to save face of being 2nd to support it behind AMD)''

    IIRC, VIA and Transmeta already support this. And, of course, all Real CPUs have supported it for years.
  • by Tufriast ( 824996 )
    Intel has already announced that it is stepping away from the "bigger clock speeds mean better processors" theory. The release of a higher-clock-speed processor seems to fly right in the face of that announcement. This is probably one of the last releases in that department they are going to make. I know that a 4 GHz part will not be manufactured by Intel. As a matter of fact, you can search for the article here on Slashdot to find that they are lowering clock speeds and going for more efficient CPUs.
  • NX/EDB (Score:4, Interesting)

    by Doc Ruby ( 173196 ) on Sunday November 14, 2004 @05:59PM (#10815441) Homepage Journal
    NX/EDB should be the default mode for memory accessed by logic: unexecutable data. Computer science, engineering, and programming practice have shown that practically all memory is used for either data or instructions; only rarely do "metaprogramming" patterns call for processing instructions as data. However, all memory space is typically treated equally, though some memory protection is instituted in VMs, like separate address spaces per process.

    A much better memory model for CPUs is an execution mask, which privileged processes can update to allocate instruction space for started child processes. Modern OSes not only use VM (virtual memory) and MMU APIs, they usually have hardware support (MMUs) for managing memory. Mapping the MMU index to a dedicated fraction of main memory (e.g. one bit per KB, i.e. 1MB of mask per 8GB of RAM, or even a dynamically configured scaling factor) would let instruction vectors execute very quickly, probably adding negligible overhead as each memory access passes through an extra "NAND". Extra CPU/MMU cache dedicated to the execution mask is better spent on such a qualitatively beneficial feature than on just extra KB of instructions to hit.

    And the benefits in uptime alone make the performance proposition a win: running marathons compared to lots of sprints ending in halts and restarts. That reliability bubbles up in efficiency throughout the cycle, from running programs to developing, debugging, maintaining, managing, and buying them - the human teams become much more efficient when the tools are always sharp with steady handles. And chip vendors would have another feature on which to compete, rather than just the pernicious price and MHz games. Intel, are you listening?
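
For a concrete feel for what the NX/EDB bit buys (a minimal sketch of my own, not the execution-mask scheme proposed above): on an NX-capable CPU with an OS that honors it, a page mapped readable and writable but not executable faults the moment a program tries to jump into it, which is what stops injected shellcode from running.

```c
/* Minimal NX/EDB illustration for Linux/x86: copy a tiny instruction into a
 * page that was deliberately NOT mapped PROT_EXEC, then try to call it.
 * On an NX-enabled CPU and kernel this should die with SIGSEGV; on older
 * hardware the "shellcode" simply runs. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    unsigned char payload[] = { 0xC3 };   /* x86 "ret" - a harmless stand-in */

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    memcpy(page, payload, sizeof payload);

    void (*func)(void) = (void (*)(void))page;
    func();   /* faults here if the execute-disable bit is enforced */

    puts("executed a data page: no NX protection in effect");
    munmap(page, 4096);
    return 0;
}
```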
  • Can someone explain why having this is such a good idea? I can imagine that putting executable code in via a buffer overflow is a bad thing. But you would still be able to change the parameters of the program and/or destroy its data. So although it might prevent some worms from spreading, I don't see the big difference. With a bit of engineering, you might alter the application enough to get the same results anyway.

    What the question boils down to is: how much more secure would this make
    • by caveman ( 7893 ) on Sunday November 14, 2004 @06:16PM (#10815582)
      Not a lot.

      For NoExecute to work properly, code sections need to be read-only. See the notes in my previous comment [slashdot.org]. Merely marking data no-execute doesn't prevent valid instructions from being overwritten unless they are protected, and that protection is itself protected. (I.e., it's no good having code sections marked no-write if the latest IE bug-du-jour can merely change the permissions from user mode; it has to be a kernel-mode operation.)
  • It performs reasonably well and even better than AMD in some areas, while falling behind in things like games.

    What are these "some areas" you speak of? Surely you're not implying that a CPU is useful for things other than gaming?
