Intel Next-Gen CPU Has Memory Controller and GPU

Many readers wrote in with news of Intel's revelations yesterday about its upcoming Penryn and Nehalem cores. Information has been trickling out about Penryn, but the big news concerns Nehalem — the "tock" to Penryn's "tick." Nehalem will be a scalable architecture, with some products having an on-board memory controller, an "on-package" GPU, and up to 16 threads per chip. From Ars Technica's coverage: "...Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core. Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it. Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction that it will take Intel is apparent in Gelsinger's insistence that, 'we view Nehalem as the first true dynamically scalable microarchitecture.' What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments." More details, including Intel's slideware, appear at PC Perspective and HotHardware.
This discussion has been archived. No new comments can be posted.


  • Is AMD beaten? (Score:4, Interesting)

    by Anonymous Coward on Thursday March 29, 2007 @08:17AM (#18527167)
    It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by Intel. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)
    • by Fordiman ( 689627 ) <fordiman AT gmail DOT com> on Thursday March 29, 2007 @08:19AM (#18527191) Homepage Journal
      I do. I feel that AMD should stop beating itself and get back to beating Intel!

      No, seriously, though. I'm holding out hope that AMD's licensing of Z-RAM will be able to keep them in the game.
    • Re:Is AMD beaten? (Score:4, Insightful)

      by Applekid ( 993327 ) on Thursday March 29, 2007 @08:25AM (#18527245)
      "Anybody have an opposing viewpoint?"

      I think "AMD fan" or "Intel fan" is a bad attitude. When technology does its thing (progress), it's a good thing, regardless of who spearheaded it.

      That said, if AMD becomes an obviously bad choice, Intel, being in the lead, will continue to push the envelope, just not as fast, since it won't have anything to catch up to. That will give AMD the opportunity to blow ahead as it did time and time again in the past.

      The pendulum swings both ways. The only constant is that competition brings out the best and it's definitely good for us, the consumer.

      I'm a "Competition fan."
      • Re: (Score:3, Interesting)

        That will give AMD the opportunity to blow ahead as it did time and time again in the past.

        That's assuming they'll have the cash and/or debt capacity to do so; a large chunk went into the ATI acquisition. Their balance sheet reads worse now than at any time in the past (IMHO), and the safety net of a private equity buyout is weak at best. Now that ATI is in the mix, it seems that competition in two segments is at risk.

        Point being that the underdog in a two horse race is always skating on thin ice. Le
      • Re: (Score:3, Insightful)

        by Anonymous Coward
        Meh.

        #define Competition > 2

        What you have here is a duopoly, which is apparently what we in the US prefer as all our major industries eventually devolve into 2-3 huge companies controlling an entire market. That ain't competition, and it ain't good for all of us.

        Captcha = hourly. Why, yes, yes I am.
      • The pendulum swings both ways. - OMG. I read that as 'The penguin swings both ways'. Then I realized that it is possibly the case. I am now in deep catatonia within myself as the images of swinging penguins (literally and figuratively) have flooded my mind.
      • Re: (Score:3, Insightful)

        by bberens ( 965711 )
        10 threads, 8 cores, I don't give a damn. The standard baseline PC workstation bought from [insert giganto manufacturer] really doesn't provide me with a better experience than it did 4 years ago. Memory bus, hard drive seek time, etc. are the stats I care about and are going to give me the most noticeable improvement in usability. CPU cores/threads/MHz is pointless; the bottleneck is elsewhere.
    • Re:Is AMD beaten? (Score:5, Interesting)

      by LWATCDR ( 28044 ) on Thursday March 29, 2007 @08:33AM (#18527327) Homepage Journal
      Simple: nothing has shipped yet.
      So we will see. Intel's GPUs are fine for home use but not in the same category as ATI or NVidia. The company that might really lose big in all this is NVidia. If Intel and AMD start integrating good GPU cores on the same die as the CPU, where will that leave NVidia?
      It could be left in the dust.
      • Re: (Score:2, Interesting)

        by eddy ( 18759 )
        Just you wait for the Ray Tracing Wars of 2011. Then the shit will really hit the fan for the graphics board companies.
        • Re: (Score:3, Informative)

          by Creepy ( 93888 )
          Ray tracing is not the be-all and end-all of computer graphics, you know - it does specular lighting well (particularly point sources) but diffuse lighting poorly, which is why most ray tracers also tack on a radiosity or radiosity-like feature (patch lighting). The latest polygon shaders often do pseudo-ray-tracing on textures, so we're actually seeing ray-traced effects in newer games (basically a ray-tracing approximation on a normal-mapped surface). You can, say, take a single flat polygon and map a face ont
      • Hopefully to license GPU technology to Intel as an outside think tank, while making motherboard chipsets and high-speed PCI Express add-in cards for things that still aren't integrated onto the CPU. They have experience in making some pretty nice chipsets, after all, and more experience than most in making high-performance PCI Express peripherals.

        PCI Express is 2.5Gbps per lane each way, so x16 means 40Gbps full duplex. I haven't seen any x32 anywhere, but there's supposed to be specs for it. That's 80Gbps
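
        (The arithmetic, as a quick sketch: these are PCIe 1.x raw line rates of 2.5 Gbit/s per lane per direction, ignoring 8b/10b encoding overhead, so usable throughput is lower.)

        #include <stdio.h>

        int main(void)
        {
            const double rate = 2.5;                  /* PCIe 1.x Gbit/s per lane, per direction */
            const int widths[] = { 1, 4, 8, 16, 32 }; /* common link widths */

            for (int i = 0; i < 5; i++) {
                double each_way = widths[i] * rate;
                printf("x%-2d: %5.1f Gbit/s each way, %5.1f Gbit/s aggregate\n",
                       widths[i], each_way, 2.0 * each_way);
            }
            return 0;
        }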
        • by LWATCDR ( 28044 )
          I so hope that physics engines don't go mainstream. I fear what eye-candy might end up on my GUI. The wobble windows are bad enough.
          • by mikael ( 484 ) on Thursday March 29, 2007 @10:35AM (#18528983)
            Bah,
                You don't need an advanced GUI and an expensive GPU to do wobble effects. Every time the guy in the next cubicle degaussed his computer monitor, *EVERY* window on my desktop would wobble, even the taskbar. To avoid any damage to my monitor, I'd degauss my monitor :)

      • Re: (Score:3, Insightful)

        by Endo13 ( 1000782 )

        If Intel and AMD start integrating good GPU cores on the same die as the CPU, where will that leave NVidia? It could be left in the dust.
        It might not affect NVidia at all. At worst, it will replace NVidia's integrated graphics chipsets, since a GPU on the CPU is essentially a replacement for graphics integrated into the chipset. It's going to be quite some time (if ever) until GPUs integrated into a CPU are powerful enough to replace add-on graphics cards.
      • Re: (Score:3, Insightful)

        by donglekey ( 124433 )
        They will all be playing the same game eventually, and that game is stream processing. Generalized stream processing using hundreds of cores doing graphics, video, physics, and probably other applications. It is already happening, although Nvidia is a pretty undisputed champion at the moment. AMD owns ATI, Intel is working on their 80-core stream processing procs, IBM has the Cell, and Nvidia has their cards (128 'shader' units on the 8800 GTX). It is all converging very quickly into the next important aspe
    • Re:Is AMD beaten? (Score:4, Interesting)

      by Gr8Apes ( 679165 ) on Thursday March 29, 2007 @08:36AM (#18527355)

      It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by Intel. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)
      Oh, good lord. Intel announces "new" technology for something that's not due for years (most likely two), which just happens to be tech you can already buy from AMD today (or will be able to with their next CPU release in the next few months), and you're running around crying "the sky is falling, the sky is falling".

      This reminds me of MS during the OS/2 days, when they first announced Cairo with its DB file system and OO interface (sound familiar? It should - features of Longhorn, then moved to Blackcomb, and now off the map as a major release). Unlike MS, I don't doubt Intel will finally release most of what they've announced, but to think that they're "ahead" is ludicrous. At this moment, their new architecture will barely beat AMD's 3+ year-old architecture. (See AnandTech or Tom's, I forget which, but there was a head-to-head comparison of AMD's 4X4 platform with Intel's latest and greatest quad CPU, and AMD's platform kept pace.) That should scare the bejeebers out of Intel, and apparently it has, because they're now following the architectural trail AMD blazed or announced previously, like multi-core chips with specialty cores.

      In other words, not much to see here; wake me when the chips come out. Until Barcelona ships, Intel holds the 1-2 CPU crown. When it ships, we'll finally be able to compare CPUs. AMD still holds the 4-way and up market, hence its stranglehold in the enterprise. Intel's announcement of an onboard memory controller in Nehalem indicates that they may finally be ready to tackle the multi-CPU market again, depending on how well architected that solution is.
      • Re:Is AMD beaten? (Score:5, Informative)

        by jonesy16 ( 595988 ) on Thursday March 29, 2007 @09:33AM (#18528067)
        I'm not sure what reviews you've been looking at, but AMD is not nearly "keeping pace" with Intel, not for the last year anyway. http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2879 [anandtech.com] clearly shows the Intel architecture shining, with many benchmarks having the slowest Intel core beating the fastest AMD. At the same time, Intel is achieving twice the performance per watt, and these are cores, some of which have been on the market for 6-12 months. Intel has also already released their dual-chip, eight-core server line, which is slated to make its way into a Mac Pro within 3 weeks. AMD's "hold" on the 4-way market exists because of the conditions 2 years ago when those servers were built. If you want a true comparison (as you claim to be striving for) then you need to look at what new servers are being sold and what the sales numbers are like (I don't have that information). But since the 8-core Intel is again using less than half the thermal power of an 8-core AMD offering, I would wager that an informed IT department wouldn't be choosing the Opteron route.

        AMD is capable of great things, but Intel has set their minds on dominating the processor world for at least the next 5 years, and it will take nothing short of a major evolutionary step from AMD to bring things back into equilibrium. Whilst AMD struggles to get their full line onto the 65nm production process, Intel has already started ramping up 45nm, and that's something AMD won't quickly be able to compete with.

        Intel's latest announcement of modular chip designs and further chipset integration is interesting, but I'll reserve judgement until some engineering samples have been evaluated. I'm not ready to say that an on-board memory controller is hands-down the best solution, but I do agree that this is a great step towards mobile hardware (think smart phones / PDAs / tablets) using less energy and having more processing power while fitting in a smaller form factor.
        • Re: (Score:2, Interesting)

          by Gr8Apes ( 679165 )
          I think you missed the point. The AMD 4X4 solution kept pace with Intel's best under the types of loads where multiple cores are actually loaded. From your link:

          When only running one or two CPU intensive threads, Quad FX ends up being slower than an identically clocked dual core system, and when running more threads it's no faster than Intel's Core 2 Extreme QX6700. But it's more expensive than the alternatives and consumes as much power as both, combined.

          My point was that 3-year-old tech could keep pace with Intel's newest. The 4X4 system is effectively nothing more than a 2-way Opteron system. With an identical number of cores, AMD keeps pace with Intel's top-of-the-line quad. That would concern me if I were Intel, especially with AMD coming out with a quad on a smaller die than those running in t

    • by vivaoporto ( 1064484 ) on Thursday March 29, 2007 @08:37AM (#18527363)
      I agree. Despite of the fact of AMD market share growing in the past 3 years, the most recent products coming from AMD are headed to beat the AMD ones, unless AMD takes a shift in the current direction and starts to follow AMD example. Nowadays, when I order my processors from my retailer, I always ask for AMD first, and only if the AMD price is significantly lower, I order AMD. I remember back in the days when you could only buy AMD processors, while now you can choose between AMD and AMD (and some other minor producers), isn't competition marvelous?

      From your truly,

      Marklar
    • by mosel-saar-ruwer ( 732341 ) on Thursday March 29, 2007 @08:52AM (#18527563)

      It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by Intel. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)

      Look at the title of this thread: Intel Next-Gen CPU Has Memory Controller and GPU.

      The on-board memory controller was pretty much the defining architectural feature of the Opteron family of CPUs, especially as Opteron interacted with the HyperTransport bus. The Opteron architecture was introduced in April of 2003 [wikipedia.org], and the HyperTransport architecture was introduced way back in April of 2001 [wikipedia.org]!!! As for the GPU, AMD purchased ATI in July of 2006 [slashdot.org] precisely so that they could integrate a GPU into their Opteron/HyperTransport package.

      So from an intellectual property point of view, it's Intel that's furiously trying to claw their way back into the game.

      But ultimately all of this will be decided by implementation - if AMD releases a first-rate implementation of their intellectual property, at a competitive price, then they'll be fine.

    • by afidel ( 530433 )
      I wouldn't say they are beaten, at least for what I'm using them for. Here's [google.com] a little spreadsheet I created to do a cost/benefit analysis for VMware ESX. There are some assumptions built in, and it's not yet a full ROI calculator, but it gets most of the big costs. Cell A1 is the number of our "standard" systems to be compared (4GB dual-CPU 2003 machines). The DL580 is 4x Xeon 7120 with 32GB of RAM, local RAID1 on 15k disks, dual HBAs and a dual-port add-on NIC. The DL585 is 2x Opteron 8220HE with 32 or 64GB
      • We've got two 585s running ESX right now (4 dual-cores per server). When our supplier installed the system for us, they told us AMDs were better for running virtual machines than Intel CPUs. There is a significant difference in the choice of CPU.
        One colleague has been setting up two comparable servers with VMware Server before, and he also stated it ran much faster on the AMD from HP than the Intel from Dell.
  • So, basically... (Score:2, Interesting)

    by GotenXiao ( 863190 )
    ...they're taking AMD's on-die memory controller, AMD/ATi's on-die GPU and Sun's multi-thread handling and putting them on one chip?

    Have Intel come up with anything genuinely new recently?
    • Re: (Score:3, Insightful)

      by TheSunborn ( 68004 )
      If they manage to combine all these features in a single chip, they really will have made a genuinely new chip production process :}
    • by ari_j ( 90255 )
      If you do the same thing as everyone else but do it better, you don't have to come up with anything new. What new things do you really want in a CPU?
      • by maxwell demon ( 590494 ) on Thursday March 29, 2007 @08:51AM (#18527541) Journal

        If you do the same thing as everyone else but do it better, you don't have to come up with anything new. What new things do you really want in a CPU?
        The dwim instruction.
      • What new things do you really want in a CPU?

        One stable and open socket technology, so you can pop custom hardware accelerators or FPGA chips into the additional sockets on a multi-CPU motherboard.

        Like AMD's AM2/AM2+/AM3 and the HyperTransport bus, with partners currently developing FPGA chips.

        Not like Intel, who change sockets with each chip generation, at least twice just to screw the customers (423 vs. 478). The Slot 1 used during the Pentium II / III / Coppermine / Tualatin era was a good solution to keep 1 in

      • by mgblst ( 80109 )
        If you do the same thing as everyone else but do it better, you don't have to come up with anything new.
         
        They are just like Microsoft....except for the better bit.

        The big thing that Intel does have going for them is that they have been able to move to smaller processes for creating chips, which gives them a big advantage in speed and power usage.
    • It really doesn't matter. These are basic computing concepts, and anyone can draw up such an architecture. What's amazing about Intel is that they did it, and it looks like they have a killer chip in the making. Being an AMD guy, I hate to say that Intel is making me convert - and I'm not ready to forgive them for the P4 pipeline design.

      But all in all, it's good news - now let's see what the other camp comes up with that will be 45nm-ready.
    • Re: (Score:3, Insightful)

      by GundamFan ( 848341 )
      Is it really fair to attribute the GPU-CPU combo to AMD/ATi if Intel gets to market first? As far as I know, neither of them has produced anything "consumer-ready" yet.
         
    • Looks like you've got one of those new computers that runs faster based on originality. I bet those Lian Li cases really make it scream then!
      • I'd say that making a chip that is hands down better than everything on the consumer market after years of being behind the eight ball (being "beaten" by a smaller competitor using dubious shenanigans no less) is pretty darn original.
    • Re: (Score:3, Insightful)

      by ceeam ( 39911 )
      They've come up with open-sourcing their GPU drivers, for example.
    • Well, the Intel IXP network processor had on-chip memory controllers and integrated multiple different types of core onto the chip (specifically an ARM CPU for general-purpose processing, and dedicated accelerator cores for network packet processing), and used fine-grained multithreading within the network microengines. It's had this kind of setup for years (I remember reading about second-generation IXP parts back in 2003). I doubt any of these concepts were genuinely new then either, but it's not strict
  • This is awesome. I'm just sitting here, waiting for more and more cores. While all the Windows/Office users whine that "it's dual-core, but there's only a 20% performance increase", I just set MAKEOPTS="-j3" (on my dual-core box) and watch things compile twice as fast. Add in the 6-12 MB of L2 cache these will have, and it's gonna rock. (my current laptop has 4 MB--that's as much RAM as my old 486 came with. (There. I got the irrelevant "when I was your age" comparison out of the way. (Yes, I know on

    • You know that things compile twice as fast in Windows, too, right?

    • Re: (Score:2, Funny)

      by billcopc ( 196330 )
      When I was your age, we used floppy disks as swap space.

      When I was your age, we overclocked our floppy drives.

      When I was your age, memory upgrades came with a soldering iron and a hand-written instruction sheet.

      When I was your age, L1 cache was just a really long wire loop with high capacitance.

      When I was your age, computers booted in about 2/10ths of a second.

      When I was your age, Compuserve was the world's biggest dial-up network :P (try THAT on for size!)

      When I was your age, we didn't let teenagers post
  • Doesn't the quality of onboard graphics suffer from being directly on the mobo? I know there's a thriving market for sub-$70 graphics cards that replace onboard graphics for the sake of better image quality. Wouldn't having it on-chip make this worse? I'd love to have onboard graphics (especially if I could get good TV-out with it) to save on heat/noise, but the stuff I've seen has been pretty lame.
    • by crow ( 16139 )
      No, the quality of onboard graphics does not suffer from being directly on the motherboard. The quality suffers because manufacturers typically use the cheapest solution for the onboard graphics, since they're targeting the business market--the onboard graphics are good enough for Office, so there's no need to buy a separate card.

      It probably doesn't make sense to put high-end graphics on the chip, because people in that market want to upgrade graphics more often than CPUs (not to mention that they probably want nVi
    • Onboard video suffers because it needs to use chipset I/O and CPU power to run, and it needs to use slower RAM than you find on real video cards, as well as needing to share that RAM with the rest of the system. Putting it in the CPU may help, but if Intel is still using the FSB, it may choke it up.
    • I guess the quality problems come from the analog part of the graphics hardware. I think the processor-integrated graphics would only cover the digital parts (i.e. doing 3D calculations etc.) while the analog parts (creating the actual VGA or TV signals) would still be handled by separate hardware.

      This "analog problem hypothesis" should be quite simple to test: Does onboard graphics image quality also suck when using a digital connector (e.g. DVI-D)? If I'm right, then it shouldn't, because in this case all
    • by jhfry ( 829244 )

      doesn't the quality of onboard graphics suffer from being directly on the mobo?

      Uhh... think about what you're asking. Does placing the graphics processor closer to the source of its information (RAM) and on a much faster bus (the CPU's internal bus) make it slower?

      The reason onboard graphics suck on most machines is not because they are integrated, it's because the mobo manufacturers have no interest in integrating the latest and greatest video processors and massive quantities of RAM into a motherboard.

      Most onbo

      • by 0123456 ( 636235 )
        "The graphics can use system memory without the performance penalty currently associated with sharing system ram for graphics."

        Yeah, let's use a slow CPU-to-memory bus, shared by the CPU, peripherals, and the video output, rather than the 30+ GB/second GPU-to-memory bus on a typical graphics card these days.

        Sticking the GPU in the same package as the CPU is a way to decrease costs for highly integrated systems, not performance. Unless you're going to stick a really fast memory interface on the CPU, anyway.
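
        (Rough numbers behind that comparison; the dual-channel DDR2-800 and 256-bit GDDR3 figures below are illustrative assumptions for parts of this era, not specs for any particular product.)

        #include <stdio.h>

        /* theoretical peak bandwidth in GB/s = megatransfers/s * bus width in bytes / 1000 */
        static double peak_gbs(double mt_per_s, int bus_bits)
        {
            return mt_per_s * (bus_bits / 8.0) / 1000.0;
        }

        int main(void)
        {
            double system_ram = peak_gbs(800.0, 128);  /* assumed: dual-channel DDR2-800    */
            double card_ram   = peak_gbs(1600.0, 256); /* assumed: 256-bit GDDR3, 1600 MT/s */

            printf("shared system RAM : ~%.1f GB/s (CPU + peripherals + video)\n", system_ram);
            printf("dedicated card RAM: ~%.1f GB/s (GPU only)\n", card_ram);
            return 0;
        }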
  • by Zebra_X ( 13249 ) * on Thursday March 29, 2007 @08:38AM (#18527377)
    Intel has a lot of cash, and the ability to invest in expensive processes earlier than most. Certainly, earlier than AMD.

    However, it's worth noting, that these are clearly AMD ideas.
    * On-die memory controller - AMD's idea - and it's been in use for quite a while now
    * Embedded GPU - a rip-off of the AMD Fusion idea, announced shortly after the acquisition of ATI.

    Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon64, we would all still be using single-core NetBurst processors.
    • Re: (Score:3, Insightful)

      Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon64, we would all still be using single-core NetBurst processors.

      Actually, Intel is leading on something very important: mobility and power consumption. Take a look at the Pentium M series. Laptops with the Pentium M always outpaced the AMD Turion series in both battery life and speed, in most applications. Now we see Intel integrating that technology into the desktop CPU series.

      • by Zebra_X ( 13249 ) *
        I would say that AMD has let Intel lead in that segment. There are very few SKUs associated with AMD's mobile segment. Being a smaller company, AMD chose to attack the server market first with the Opteron, and the high-end PC market with the FX line. Both of those lines are driving innovation at AMD. The 3XXX, 4XXX, 5XXX, and 6XXX lines, as well as the Turions, are all reduced implementations of their server chips.
    • by jimicus ( 737525 )
      Intel is no longer leading as they have in years past

      Did they ever? Maybe for desktop PCs, but not for chips in general. The DEC Alpha chip was way ahead of anything Intel had at the time.
    • Intel is no longer leading as they have in years past


      Actually, I would say that of AMD.
    • From a market standpoint, yes, AMD made the first inroads with on-chip memory controllers and now their integrated GPU (which they probably wouldn't be doing if not for purchasing a GPU manufacturer). From a technical standpoint, I don't think AMD really did anything others hadn't pondered already. It's not like examples of "integrated everythings" can't be found elsewhere.

      On either side this isn't a huge engineering breakthrough. It's simply trying to gain more business. Not that there is anything wrong wit
  • So it took Intel almost 9? years to integrate the Intel i740 GPU onto a CPU? I always wanted native DirectX 6.1 support right from the get-go!
  • by j00r0m4nc3r ( 959816 ) on Thursday March 29, 2007 @08:48AM (#18527499)
    I can't wait for the Frodo and Samwise chips
  • by sjwaste ( 780063 ) on Thursday March 29, 2007 @08:50AM (#18527527)
    In the meantime, you can get an AMD X2 3600 (65nm Brisbane core) for around $85 now, and probably in the $60 range well before these new products hit. The high end is one thing, but who actually buys it? Very few. I don't know anyone that bought the latest FX when it came out, or an Opteron 185 when they hit, or even a Core2Duo Extreme. All this does is push the mid- to low-end products down, and a ~$65 dual core that overclocks like crazy (some are getting 3 GHz on stock volts on the 3600) would seem like the best price/performance to me.

    AMD's not out because they don't control the high end. Remember, you can get the X2 3600 w/ a Biostar TForce 550 motherboard at Newegg for the same price as an E4300 CPU (no mobo), and that's the board folks are using to get it up to crazy clock speeds.
    • Too bad Intel's GPUs all suck, huh? Gee... when's the last time I bought an Intel video card... um... never.

      AMD and ATI have a better partnership. I'm still waiting for Intel to try and buy Nvidia. With Nvidia's latest disaster, known as the GeForce 8800 GTX, I'm curious if they're ready to sell out to Intel.

      The 8800 GTX performs like shit in OpenGL apps. An $80 AGP ATI card outperforms the GeForce 8800 GTX ($600) in OpenGL applications in XP.

      NVidia has released a driver for the GeForce 8800s since Jan. It s
  • Two problems (Score:4, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis&gmail,com> on Thursday March 29, 2007 @09:06AM (#18527721) Homepage
    1. Putting a GPU on the processor immediately divides the market for it. Unless this is only going to be a laptop processor it probably won't sell well on desktops.

    2. Hyperthreading only works well in an idle pipeline. The Core 2 Duo (like the AMD64) has a fairly high IPC count and hence few bubbles (compared to, say, the P4). And even on the P4 the benefit is marginal at best, and in some cases it hurts performance.

    The memory controller makes sense as it lowers the latency to memory.

    If Intel wants to spend gates, why not put in more accelerators for things like the variants of the DCT used by MPEG, JPEG, and MPEG audio? Or how about crypto accelerators for things like AES and bignum math?

    Tom
    • Re: (Score:2, Insightful)

      by jonesy16 ( 595988 )
      The point of this processor is that it will be modular. Your points are valid but I think you're missing Intel's greater plan. The GPU on core is not a requirement of the processor line, merely a feature that they can choose to include or not, all on the same assembly line. The bigger picture here is that if the processor is as modular as they are claiming, then they can mix and match different co-processors on the die to meet different market requirements, so the same processor can be bought in XY confi
    • 1: Integrated graphics sells very well on the desktop; almost every single machine in your big-box shops has integrated graphics. I am sure it outsells machines with separate graphics cards on the desktop. Gamers are not the market.

      2: I am skeptical about hyperthreading, but it all depends on the implementation. I don't think this is something they are pursuing just for marketing. They must have found a way to eke out even better loading of all execution units by doing this. I can't imagine this being done if it a
      • 1. Yes there are many integrated graphics cards, but most gamers won't use them. There is a huge market for gamer PCs (hint: who do you think those FX processors are made for?)

        2. Don't give Intel that much credit. The P4 *was* a gimmick. And don't think that adding HTT is "free", even at worst. It takes resources to manage the extra thread (e.g. stealing memory-port access to fetch/execute opcodes, for instance).

        In the case of the P4 it made a little sense because the pipeline was mostly empty (read: it was a shitty de
        • by guidryp ( 702488 )
          I just think it is too early to judge. Putting the video on the die could offer performance benefits sufficient that only ultra-high-end graphics cards would still make sense. That is a small portion of the market.

          On HT, edition two, I have a skeptical wait-and-see attitude. Though I will probably buy a new computer in 2007, so it doesn't matter to me for a long time, as I will probably squeeze 5 years out of my next machine. So 2007 and 2012 are the years that interest me. :-)
    • Well, it would help if there could be some kind of SLI / CrossFire setup between the CPU-based GPU and the video card the display is plugged into, but this is more likely on an AMD system, and the upcoming AMD desktop chipsets are listed as supporting HTX slots, so you could run this over the HT bus.
    • Won't sell well on desktops? What about office users? What about people who don't care about gaming? I'm sure it'll be enough to run Aero Glass, which is probably enough for most people.

      • Even my box at the office has a PCIe add-on GFX card. The onboard NVidia was just too buggy (cursor didn't work, lines would show up, etc.) even with the tainted NVidia drivers. I bought a low-end 7300 PCIe card and the problems were solved.

        What happens when you hit a limitation/bug in the Intel GPU?

        Also, don't misunderestimate :-) the revenue from the home/hobby/gamer market. The R&D cost of most processors is paid for by the gamer/server markets. AMD, for instance, doesn't pay off the R&D/fab costs by selli
    • by Ant P. ( 974313 )
      Doing common operations in hardware sounds like a much better plan than just throwing more general-purpose processing at it.

      I really wish they'd add hardware acceleration for text rendering, considering it's something everything would benefit from (using a terminal with antialiased TTFs is painfully slow). There are supposedly graphics cards that do this, but I've never come across one.
      • Text rendering sounds like a job for the GPU, not the CPU. Things like DCTs [for instance] could be done in the GPU, but they're more universal. For instance, you can do MPEG encoding without a monitor, so why throw a GPU in the box?

        Crypto is another big thing. It isn't even that you have to be faster, but safer. Doing AES in hardware for instance, could trivially kill any cache/timing attacks that are out there.

        Tom
  • Bursts of CPU (Score:3, Interesting)

    by suv4x4 ( 956391 ) on Thursday March 29, 2007 @09:17AM (#18527853)
    I can see those being quite hot for servers, where running "many small" tasks is where the game is.

    On a desktop PC you often need the focused application (say, some sort of graphical/audio editor, game, or just a very fancy flash web site even) to get most of the power of the CPU to render well.

    If you split the speed potential 16 ways, would desktop users see an actual speed benefit? They'll see increased responsiveness from the smoother multitasking of the more and more background tasks running on our everyday OSes, but can mostly single-task-focused desktop usage really benefit?

    Now, of course, we're witnessing ways to split the concerns of a single-task application into multiple threads: the new interface of Windows runs in a separate CPU thread and on the GPU, never mind whether the app itself is single-threaded or not. That's helping.

    Still, serial programming is, and is going to be, prevalent for many, many years to come, as most tasks casual / consumer applications perform are inherently serial and not "parallelizable", or whatever that would be called.

    My point being, I hope we'll still be getting *faster* threads, not just *more* threads. The situation now is that it's harder and harder to communicate "hey, we have only 1000 threads/cores, unlike the competition which has 1 million, but we're faster!". It's just like AMD's tough position in the past, explaining that their chips are faster despite having a slower clock rate.
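
    (What I'm describing is basically Amdahl's law: if only a fraction p of a task can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A minimal sketch, with assumed parallel fractions:)

    #include <stdio.h>

    /* Amdahl's law: speedup on n cores when a fraction p of the work is parallel */
    static double amdahl(double p, int n)
    {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void)
    {
        const double fractions[] = { 0.50, 0.90, 0.99 }; /* assumed parallel fractions */

        for (int i = 0; i < 3; i++) {
            double p = fractions[i];
            printf("p = %.2f: 2 cores -> %.2fx, 16 cores -> %.2fx\n",
                   p, amdahl(p, 2), amdahl(p, 16));
        }
        return 0;
    }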
    • All "things" you do are serial, you can just have multiple serial tasks going at once. This isn't hardware where we get to make up data paths and insert logic wherever we want.

      This is the second post I've seen along these lines and I'm beginning to think people really don't understand what software is or how processors work... Even in the slightest.

      A processor can't just magically decide to have, say two multipliers in parallel just because your task demands it. You can do that in hardware because you are
  • by Doc Ruby ( 173196 ) on Thursday March 29, 2007 @09:29AM (#18528007) Homepage Journal
    OK, these new parallel chips aren't even out yet, and software has to get the hardware before SW can improve to exploit the HW. But the HW has all the momentum, as usual. SW for parallel computing is as rudimentary as a 16-bit microprocessor.

    What we need is new models of computing that programmers can use, not just new tools. Languages that specify purely sequential operations on specific virtual hardware (like scalar variables that merely represent specific allocated memory hardware), or metaphors for info management that computing killed in the last century ("file cabinets", trashcans of unique items and universal "documents" are going extinct) are like speaking Latin about quantum physics.

    There's already a way forward. Compiler geeks should be incorporating features of VHDL and Verilog, inherently parallel languages, into gcc. And better "languages", like flowchart diagrams and other modes of expressing info flow, that aren't constrained by the procedural roots of that HW-synthesis old guard, should spring up on these new chips like mushrooms on a dewy morning lawn.

    The hardware is always ahead of the software - as instructions for hardware to do what it does, software cannot do more. But now the HW is growing literally geometrically, arguably even exponentially, in power and complexity, beyond our ability to even articulate what it should do within what it can. Let's see some better ways to talk the walk.
    • The problem with this fantasy is that we have to target a variety of platforms, not all of which have parallel processing capabilities.

      Also, we already have threading capabilities that are trivial to make use of. If you're talking about *vectorization*, then yeah, that's not well supported in a portable fashion. But threading? pthreads makes that trivial.

      What features from Verilog would you want? Concurrent expressions? You realize how expensive that would be?

      a = b + c
      d = c + e

      sure that makes sense in
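
      (For illustration only, the two assignments above as two pthreads; the struct and worker names are made up. It also shows why concurrency this fine-grained isn't worth it: creating and joining the threads costs vastly more than the two additions.)

      #include <pthread.h>
      #include <stdio.h>

      struct add_job { int x, y, result; };

      /* worker: compute one independent addition */
      static void *add_worker(void *arg)
      {
          struct add_job *job = arg;
          job->result = job->x + job->y;
          return NULL;
      }

      int main(void)
      {
          int b = 1, c = 2, e = 3;
          struct add_job j1 = { b, c, 0 };   /* a = b + c */
          struct add_job j2 = { c, e, 0 };   /* d = c + e */
          pthread_t t1, t2;

          pthread_create(&t1, NULL, add_worker, &j1);
          pthread_create(&t2, NULL, add_worker, &j2);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);

          printf("a = %d, d = %d\n", j1.result, j2.result);
          return 0;                          /* build with: cc file.c -lpthread */
      }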
  • More information (Score:3, Informative)

    by jonesy16 ( 595988 ) on Thursday March 29, 2007 @09:37AM (#18528125)
    http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=2955 [anandtech.com] provides a much more detailed look at the new processor architectures coming from Intel. A little better than the PR blurb at Ars'.
  • by gillbates ( 106458 ) on Thursday March 29, 2007 @09:54AM (#18528363) Homepage Journal

    It is interesting to note that Intel has now decided to put the memory controller on the die, after AMD showed the advantages of doing so.

    However, I'm a little dismayed that Intel hasn't yet addressed the number one bottleneck for system throughput: the (shared) memory bus itself.

    In the 90's, researchers at MIT were putting memory on the same die as the processor. These processors had unrestricted access to their own internal RAM. There was no waiting on a relatively slow IDE drive or Ethernet card to complete a DMA transaction, no stalls during memory access, etc...

    What is really needed is a redesign of the basic PC memory architecture. We really need dual ported RAM, so that a memory transfer to or from a peripheral doesn't take over the memory bus used by the processor. Having an onboard memory controller helps, but it doesn't address the fundamental issue that a 10 ms IDE DMA transfer effectively stalls the CPU for those 10 milliseconds. In this regard, the PC of today is no more efficient than the PC of 20 years ago.

    • Do you have any idea how large a 1GB dual-ported stick of RAM would be? It'd probably cost 2 to 3x as much to make.

      Also, DMAs don't take nearly that long to fulfill. This is how you can copy GBs of data from one drive to another and still have a huge amount of processor power to use. The drives don't lock the bus while performing the read, only when actually transferring data. Otherwise, if you locked the bus for 10ms, that would mean you can't service interrupts, say the timer. Which means your clock would be
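
      (Rough numbers, using assumed era-typical figures rather than measurements: a disk streaming about 60 MB/s into a memory bus with roughly 6.4 GB/s of peak bandwidth occupies only about 1% of that bus, so a DMA stream is nowhere near a 10 ms stall.)

      #include <stdio.h>

      int main(void)
      {
          /* assumed, illustrative figures -- not measurements of any specific system */
          const double disk_mb_per_s = 60.0;    /* sustained disk transfer rate        */
          const double bus_mb_per_s  = 6400.0;  /* single-channel DDR2-800 peak (MB/s) */

          double busy_fraction    = disk_mb_per_s / bus_mb_per_s;
          double busy_us_per_10ms = busy_fraction * 10000.0;  /* share of a 10 ms window, in us */

          printf("memory bus busy with disk DMA: %.2f%% (~%.0f us out of every 10 ms)\n",
                 busy_fraction * 100.0, busy_us_per_10ms);
          return 0;
      }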
  • This has been coming for a while, and shouldn't surprise anybody. I was expecting it to come from NVidia, though, which had been looking into putting a CPU on their graphics chips and cutting Intel/AMD out of the picture. Since they already had most of the transistor count, this made sense. They already had the nForce, which has just about everything but the CPU and RAM (GPU, network interface, disk interface, audio, etc) on one chip. But they never took the last step. Probably not because they couldn
