Intel Launches New Chipset

mikemuch writes "The new P35 and G33 chipsets, codenamed 'Bear Lake', are now available. They have a new memory controller that supports DDR3 RAM at up to 1333MHz, a new southbridge, and will support the upcoming 45nm Penryn CPUs. They don't yet have an actually new and different GPU: their GMA 3100 is pretty much the same as the GMA 3000 of the G965 chipset." For a little more technical info you can also check out the Hot Hardware writeup.

  • What's Different (Score:4, Insightful)

    by Nom du Keyboard ( 633989 ) on Monday May 21, 2007 @01:13PM (#19211129)
    will support the upcoming 45nm Penryn CPUs.

    What does Penryn need that's new and different in the way of support? Is it just a bump in FSB speed?

    • Re:What's Different (Score:4, Informative)

      by morgan_greywolf ( 835522 ) * on Monday May 21, 2007 @01:32PM (#19211399) Homepage Journal
      Well, for one, Intel's biggest instruction set change in 5 years: SSE4 extensions [com.com], an update to Intel's SIMD instruction set.

      I know. I'm not all that excited, either. :)
      • by dfghjk ( 711126 )
        In what way do the new instructions require new chipset support? That is what he asked, after all.
        • Re: (Score:3, Interesting)

          by TheRaven64 ( 641858 )

          Rumour has it (I haven't kept up, so maybe rumour had it) that SSE4 would include scatter-gather instructions. These allow you to specify multiple memory addresses to be loaded into the same vector. This makes auto-vectorisation much easier for compilers, since your memory layout no longer has to be designed with vectorisation in mind.

          If this is true, then it might need co-operation from the memory controller to work effectively. Since Intel's memory controllers are on the north bridge chip, it would
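
          To make the idea concrete, a gather load expressed in plain C looks something like the sketch below (my own illustration, not actual SSE4 code; whether SSE4 really ships such an instruction is exactly what the rumour is about). A hardware gather would perform all of these indexed loads into a single vector register in one instruction, which is what would let a compiler vectorise loops whose data isn't laid out contiguously:

              /* Plain-C sketch of what a hardware "gather" load does (illustrative only). */
              void gather_f32(float *dst, const float *src, const int *idx, int n)
              {
                  int i;
                  for (i = 0; i < n; i++)
                      dst[i] = src[idx[i]];   /* indexed loads, contiguous stores */
              }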

      • by DrYak ( 748999 ) on Monday May 21, 2007 @02:03PM (#19211745) Homepage
        A chipset is just supposed to talk to the CPU and, in the case of Intel's architecture, to the memory.

        A new chipset for DDR3 is logical in this situation: the chipset has to handle a different, electrically incompatible memory type.

        But why does a new CPU need a newer chipset?!

        Meanwhile, in AMD's land, there's a standard between the chipset and the CPU called Hypertransport.
        As long as both the CPU and the chipset follow the same protocol, or a compatible variation of it (AM2 being HT 2.0, AM2+ and AM3 being HT 3.0), you can pretty much pair anything you want.
        The only restriction for a motherboard is to have a compatible socket (the CPU has an on-board memory controller and speaks directly to the RAM sticks, so there are different socket types for different memory combinations: 754 is for single-channel DDR, 939 is for dual-channel DDR, AM2 is for DDR2, and Opteron Socket F is for DDR2 plus a much higher number of Hypertransport links), and even that is getting stabilised (future AM2+ and AM3 CPUs can plug into today's AM2 boards).

        Why can't Intel guarantee the same kind of stability?!

        Oh, yes, I know: they make chipsets and earn money by selling more motherboards.
        Even back in the Pentium II/III era they went through the same cycle, releasing several incompatible chipsets and slot/socket formats in order to pump up motherboard sales, even though the same Slot 1 PII motherboard could have lasted until the last PIII using only adapter slockets.

        Meanwhile AMD is getting recommended on various websites (like Ars Technica) as the preferred solution for entry- and mid-level machines, because of cheaper boards and more stable (and upgradable) hardware.

        The stability of AM2/AM2+/AM3 is one of AMD's biggest advantages over LGA775 and should be put forward.
        • The Slot 1 business was just a temporary thing, pretty much forced by the need for high-clock cache; the silicon manufacturing technology did not yet allow for affordable large on-die cache. I thought the slot/socket adapter was a very good idea. I don't think an equivalent was offered for adapting a socketed Athlon into a slot; early Athlons used slots, later ones were socketed.

          The stability of AM2/AM2+/AM3 is one of AMD's biggest advantages over LGA775 and should be put forward.

          What do you mean by "sho
          • Anyway, it would be nice to have a broader upgrade range. While AMD's pattern is superior, it's still far from ideal, especially when they too have socket variations.

            I agree on this point. Although, as I said, there's a good commitment coming from AMD to stabilise the AM2/AM2+/AM3 family, we could hope for even better.

            Now that the on-CPU-die memory controller has definitely decoupled the CPU/Memory (the fast evolving part) from the northbridge/motherboard (much more constant - except maybe for the graphical c

        • by mrchaotica ( 681592 ) * on Monday May 21, 2007 @02:32PM (#19212103)

          Meanwhile, in AMD's land, there's a standard between the chipset and the CPU called Hypertransport.

          Note that that's not just "AMD land," that's IBM land, VIA land, Transmeta land, HP land, SUN land, and every-other-chip-manufacturer-except-Intel land.

        • by meatpan ( 931043 ) <meatpan@nosPAM.gmail.com> on Monday May 21, 2007 @02:56PM (#19212437)

          Oh, yes, I know: they make chipsets and earn money by selling more motherboards.
          As a former Intel employee, I can guarantee you that Intel does NOT make money from chipsets and motherboards. The entire purpose of Intel's server and desktop motherboard operation is to enable their new technology through early discovery and elimination of major processor bugs, and to help the actual motherboard/chipset manufacturers to better support Intel architecture.

          Why would Intel invest in chipsets and motherboards when the profit margins are slim (as compared to much higher profit margins for a cpu)? For one, the investment in chipsets and motherboards has saved the company from major disasters on several occasions by early detection of obscure bugs. Knowledge of internal problems can allow the company to delay or cancel a product (such as Timna [pcworld.com]), which is much less harmful to a stock price than shipping a broken product.

          By the way, divisions within a company that constitute a material [wikipedia.org] portion of earnings are required to report their revenue. If you want to know whether or not Intel makes money from chipsets, you can look it up in public records.
          • Maybe that's why Asus has such sketchy support compared to Intel. Their site is horrid, and they released my present mobo (P5W HD) without testing it with retail C2D CPUs (they used engineering samples). When motherboards wouldn't boot with the CPUs (as advertised), the solution Asus had was to have people buy a new Celeron CPU so they could update the BIOS and THEN the C2D would work. My next mobo will be Intel. However, with all manufacturers I learned not to rush to buy a new board because many of
            • Actually both Asus and Gigabyte are shipping boards built using engineering samples (!!!). This is visible in the VR-Zone [vr-zone.com] and OCWorkbench [ocworkbench.com] reviews, with the chips marked "Secret" "ES". This is a very dubious way to build a retail product.
            • However, with all manufacturers I learned not to rush to buy a new board because many of them have headaches in the first few months/revisions.

              The real lesson you should've learned is to always buy CPU+MB+RAM in bundled form, where the retailer has already put the three components together and guarantees that they work. (MWave charges all of $9 for the service.) With a motherboard bundle, you eliminate all of the guesswork and you're sure to get a working setup. Some places call this an "assemble & test" o
          • This is Slashdot, where all companies are evil, and conspiracies rule the day. Oh, and AMD is always better for some reason.
        • Re: (Score:2, Insightful)

          by darkwhite ( 139802 )

          Why can't Intel guarantee the same kind of stability?!

          You've got to be fucking shitting me. What are you high on? Because I'd like some of that. I can't see a single statement in your post that isn't absurd and that doesn't turn the truth on its head.

          There are plenty of reasons to favor AMD over Intel, but sockets are not one of them.

          Have you checked the longevity of LGA775, the only desktop and entry-level server socket that matters? And have you compared that to the longevity of AMD's sockets? Have you read the fucking article? Have you looked at Intel's CP

        • "Stability of AM2/AM2+/AM3 is one of biggest AMD's advantage over LGA775 and should be put forward."

          Are you SERIOUSLY trying to say that 3 separate AM- systems are more stable than one socket?

        • But why does a new CPU need a newer chipset?!

          I looked for a definitive answer from nVidia or eVGA, but it's not clear whether nForce 680i boards will support Penryn/Wolfdale or not. FWIW, a moderator at the eVGA forums thinks they will [evga.com], but nobody knows for sure.

          So unless Intel says otherwise, chipsets from other vendors may work with Penryn, regardless of Intel's chipset refresh for DDR3. I mean, most enthusiast boards do well over 1333 MHz FSB, and also have fine-grained voltage adjustments. Unless t

        • One wonders why Intel gets any blame for these designs. Games and graphics cards force upgrades far more often than the motherboard itself does. What is the "preferred solution for entry-/middle-level machines", and for what audience?

          My old BT2 board still performs as well as current boards on the market.

          That preferred solution probably doesn't represent an average between GPU-driven and CPU-driven markets. Hmm.
      • SSE 4 (Score:4, Informative)

        by serviscope_minor ( 664417 ) on Monday May 21, 2007 @02:05PM (#19211781) Journal
        Wikipedia has a more useful description of SSE4 [wikipedia.org]

        As far as I know, gcc only supports up to SSE3 intrinsics. Look in pmmintrin.h
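
        For example, a minimal SSE3 snippet of my own (not from the article; built with something like gcc -msse3) using an intrinsic declared in pmmintrin.h:

            /* hadd.c -- _mm_hadd_ps is an SSE3 horizontal add exposed via pmmintrin.h */
            #include <stdio.h>
            #include <pmmintrin.h>

            int main(void)
            {
                __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);  /* vector {1, 2, 3, 4} */
                __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);  /* vector {5, 6, 7, 8} */
                __m128 h = _mm_hadd_ps(a, b);                   /* {1+2, 3+4, 5+6, 7+8} */
                float out[4];
                _mm_storeu_ps(out, h);
                printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 3 7 11 15 */
                return 0;
            }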

      • Re: (Score:3, Interesting)

        by IPFreely ( 47576 )
        Well, for one, Intel's biggest instruction set change in 5 years: SSE4 extensions, an update to Intel's SIMD instruction set.
        Really? I would have thought going 64 bit would be considered a slightly larger instruction set change than SSE4.

        Maybe it does not count since it was an AMD invention rather than an Intel invention?

        • Considering Intel invented x86 decades ago and AMD just copied that, I'd consider them even. Intel unsuccessfully tried to move the world off of the ancient x86 instruction set, and lazy companies like Microsoft decided to put their weight behind the 64-bit x86 hack that AMD offered. Now we get to suffer another 20 years of working around x86.
          • by IPFreely ( 47576 )
            x86_64 is certainly a kludge on a kludge. But Itanic is no better, and in many ways worse. Given the choice of the two, MS made the right one.

            If you really want to get off of the bad hardware, you could maybe go with POWER or Alpha, or go invent something else completely new. MS tried with Alpha for a while, but no one bought it. So it looks like it's really our fault, not theirs.

            • MS tried with Alpha for a while, but no one bought it. So it looks like it's really our fault, not theirs.

              Amen. Microsoft supported x86, MIPS, PowerPC, and Alpha with the first release of NT 4.0. Nobody bought PowerPC and MIPS, and very very few people bought Alpha. So by the time Win2k came around, Windows was x86 only. I really hoped Alpha or PowerPC would succeed and get us off the multilayered hack that is x86, but the masses did not agree with me.

              The good thing about this debacle for MSFT is that toda

          • Considering Intel invented x86 decades ago and AMD just copied that, I'd consider them even. Intel unsuccessfully tried to move the world off of the ancient x86 instruction set, and lazy companies like Microsoft decided to put their weight behind the 64-bit x86 hack that AMD offered. Now we get to suffer another 20 years of working around x86.

            Nothing lazy about it.

            Intel offered up the Itanium, a 64-bit platform that ran existing 32-bit code in slow-motion mode. Folks making purchase decisions looked at
    • Re:What's Different (Score:4, Informative)

      by dgoldman ( 244241 ) on Monday May 21, 2007 @01:34PM (#19211427)
      Voltage is lower. Existing (pre-P35) boards won't support the Penryn.
      • >Voltage is lower. Existing (pre-P35) boards won't support the Penryn.

        Great, so that means my Asus Striker Extreme (which allows one to set the voltage in 0.01V increments from 0.5V to 3.0V, or something very similar to that) will support Penryn with a simple vdrop. Excellent. ... Yeah, right.
      • Voltage is lower. Existing (pre-P35) boards won't support the Penryn.

        Do you have a link for that? (Preferably from Intel, or a motherboard vendor, or a review site that talked to Intel) Because I can't find anywhere that old motherboard incompatibility is stated definitively.
    • Sleep States (Score:3, Informative)

      by lmnfrs ( 829146 )

      Penryn does C6 [google.com]. I don't know which, if any, requirements are satisfied in current boards.

      The subsystems of the board (buses, controllers, GPU, etc.) need to function by themselves while the processor is off. I'd imagine there are also certain hardware requirements to bring the CPU out of C6 that the new boards provide.
      The average enthusiast probably doesn't need outstanding battery life; it's just a nice extra. But for business/professional uses, this is a very welcome development.

    • Quote: "VRM 11 Required For 45-nm Processors" "One reason for varying processor support is the voltage regulator circuit of 3-series motherboards. It needs to be VRM 11.0 compliant, which is key when it comes to 45-nm processor support. Let me say that the problem isn't decreasing voltage levels, but strong power fluctuation due to millions of transistors clocking up and down, or switching on and off. Remember that future quad-core processors will be able to dynamically adjust clock speeds for each core ind
  • by Anonymous Coward
    Really, it would be nice if we could get an external gfx card (PCIe) for "our" systems.
    • Re: (Score:3, Interesting)

      by jandrese ( 485 )
      You want Intel Graphics as a actual video card? You are aware that you can buy low end nVidia and ATI cards for less than $50 that will outperform them right?
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        But do those nVidia and ATI cards have open source drivers? The Intel chips do!

        Intel is therefore the best for Linux.
        • Yep -- a GMA 950 will outperform even a GeForce 8800 GTX, when the GeForce is using the nv driver!

        • Nvidia cards have an open-source driver, called "nv".

          Of course, it's 2D-only, so those fancy Nvidia cards are basically worthless for 3D video if you want all open-source drivers.

          Kudos to Intel for releasing open-source video drivers; I just wish they'd make stand-alone PCIe boards with their chips.
        • But do those nVidia and ATI cards have open source drivers? The Intel chips do!

          Do Intel's X3000 open-source drivers have support for the T&L units and vertex shaders built into the X3000? The Windows drivers certainly don't [intel.com], even after 9 months on the market!

          Almost 3 months ago, beta drivers were promised, but they have yet to surface. The X3000 is still using the processor to perform vertex shading / T&L, just like the GMA 900 / 950, and that's why it still gets beat by the old Nvidia GeForce 61
      • by awb131 ( 159522 ) on Monday May 21, 2007 @01:55PM (#19211687)
        > You want Intel Graphics as a actual video card? (sic)

        Well, not really, no. But huge numbers of run-of-the-mill business PCs, plus the Apple "consumer" line (Mac mini, iMac and MacBooks), use the standard Intel graphics hardware. It does OK for most people's purposes, and the install base is huge, so a bump in capabilities for the onboard graphics chip would be noteworthy.
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday May 21, 2007 @01:57PM (#19211695) Homepage Journal

        I wouldn't normally post something redundant like this, but both of the other replies are from ACs and many people will never read them or know they exist.

        So far, Intel is the only company with supported OSS drivers. AMD has "promised" to deliver them for ATI cards, but who knows how long that will take? And nVidia has made no such promise.

        In addition, if we could get them without shared memory, the performance would likely improve and it wouldn't drag down system performance. So that would be a great thing.

        When we get OSS drivers for ATI, it might become possible to use one under Linux (or any other OS except MacOS, for which Apple participates in driver development) in a reliable fashion. But ATI's drivers are poop anyway. Regardless, those who want a 100% OSS system cannot buy a current nVidia card, as they are unsupported; an older nVidia card still in production is likely to come from one of the least-reputable vendors, so a card supported by the 'nv' driver that's worth using will be hard to come by. Intel is currently the only credible choice for accelerated video with OSS drivers.

        • But ATI's drivers are poop anyway.

          Which is probably why they have claimed they will do it. Their drivers stink. If they can get people to code up quality drivers for little to no expense, suddenly they are much more competitive with NVIDIA, plus they've bought mindshare in the OSS community.
          • But ATI's drivers are poop anyway.

            Which is probably why they have claimed they will do it. Their drivers stink. If they can get people to code up quality drivers for little to no expense, suddenly they are much more competitive with NVIDIA, plus they've bought mindshare in the OSS community.

            Yeah, it sounds like a win-win situation for ATI. All the proprietary, encumbered code in the world hasn't enabled them to create drivers that are worth one tenth of one shit. I've kept trying ATI off and on over the

            • Well, it does make sense. It's all about not reinventing the wheel over and over again, and working together with others to create something greater than any one of you could alone.

              The only people getting shafted by open-source software are the proprietary software vendors that compete directly against OSS solutions. Too bad, so sad. Time to move on to something else; I don't see Cadence, MentorGraphics, or AutoDesk complaining much about OSS software cutting into their business. Although AutoDesk bette
        • I wouldn't expect that to change. ATI's open drivers will give limited functionality. There will be no 3D acceleration with those drivers.
        • by mczak ( 575986 ) on Monday May 21, 2007 @02:56PM (#19212457)

          In addition, if we could get them without shared memory, the performance would likely improve and it wouldn't drag down system performance. So that would be a great thing.
          I don't know how much faster the "same" graphics chip would be if it just got its own RAM, but the idea that IGPs drag down system performance is basically a myth nowadays. It used to be true when single-channel SDRAM was the best you could get, but it's basically a non-issue with today's dual-channel DDR2 memory systems. (The bandwidth needed for scanout, which is basically what slows things down even if you don't do anything graphics related, hasn't increased that much: a 1920x1200x32 display at 60Hz needs roughly 500MB/s. If you have a chipset which provides 1066MB/s (single-channel 133MHz SDR SDRAM) that's a lot, but if you have a chipset which provides 12.8GB/s (dual-channel DDR2-800) it's just not that much.)
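
          For the curious, the arithmetic behind that rough 500MB/s figure works out as below (my own back-of-the-envelope numbers, not from any datasheet):

              /* scanout.c -- bytes per second needed just to refresh a 1920x1200, 32-bit display at 60Hz */
              #include <stdio.h>

              int main(void)
              {
                  double bytes_per_sec = 1920.0 * 1200.0 * 4.0 * 60.0;  /* width * height * bytes per pixel * refresh rate */
                  printf("%.0f MB/s\n", bytes_per_sec / 1e6);           /* prints ~553 MB/s */
                  return 0;
              }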
          • a 1920x1200x32 display at 60Hz needs roughly 500MB/s. If you have a chipset which provides 1066MB/s (single-channel 133MHz SDR SDRAM) that's a lot, but if you have a chipset which provides 12.8GB/s (dual-channel DDR2-800) it's just not that much.

            The problem isn't just one of overall bandwidth use, but also one of contention. Further, when used for 3D the memory consumption will be greater because not only graphics memory but also texture memory is in system RAM.

            And of course, you don't actually get

            • by mczak ( 575986 )

              The problem isn't just one of overall bandwidth use, but also one of contention.

              Sure, but that can be dealt with. I'm not exactly sure how current chipsets handle it, but they certainly have some cache for the display buffer, together with some logic for prioritization (if the display buffer cache is full, requests from the display controller to the memory controller get low priority; as it empties, the priority increases).

              Further, when used for 3D the memory consumption will be greater because not only graphics memory but also texture memory is in system RAM.

              Yes, but as I said, that doesn't count as "dragging down system performance". It will "only" drag down 3D performance. It will eat some RAM, true, but as long as you have

              • Notice the non-3D benchmarks are within one percent whether the IGP is enabled or not - though the resolution isn't stated (I'd guess it was 1600x1200 but I could be wrong...). I never said IGPs are fast for 3D :-).

                This is irrelevant because the maximum load on the system only occurs when the system is doing 3D graphics.

                What we need is two versions of essentially the same graphics card, one IGP and one standalone. But good luck finding that.

                Or more to the point, we need to run another benchmark while a 3d benchma

        • Re: (Score:2, Informative)

          by abundance ( 888783 )
          In addition, if we could get them without shared memory, the performance would likely improve and it wouldn't drag down system performance. So that would be a great thing.

          I think the shared memory issue is a bit overstated. With current memory speeds and dual-channel bandwidth, system memory can handle the additional traffic load of the graphics subsystem without suffering that much. And as far as the 3D graphics performance of those budget cards is concerned, that's mainly GPU-bound, not memory bo
          • Of course, if you have a low-specced system, say with 512MB of RAM, even the 80MB that the graphics card steals for itself hurts. But a 512MB configuration is quite doomed nowadays by itself.

            512MB is adequate for Windows XP (although you will notice a difference going to 1GB) and for OSX (ditto) but not enough for Vista. It's more than enough for any Linux you care to use.

            Note that the only OS with which you are actually doomed with 512MB is Vista...

            Now, with that said: OSX is slow no matter how much RAM you have,

            • Vista (Ultimate Edition) pretty much dictated 1 GB RAM for me, initially, and started to behave at the 2 GB mark. Once I moved to the 64-bit version of Vista, I went up to 4 GB and so far it just purrs. XP Pro SP2 needed about 2 GB to really rock.
            • Note that the only OS with which you are actually doomed with 512MB is Vista...

              Mmh ya, you're right, the "quite doomed" thing was... quite stretched. =)
              After all, my main machine at home is an XP 2500+ with just 768MB and it's okay with Windows XP. My sister still uses a Duron 600 with 512MB and she's fine surfing and writing.

              My point was about the shared memory stuff - its drawbacks would kick in only with a limited amount of system memory, and you'd be better off simply adding RAM than thinking about discrete memory for the integrated graphics card.
              And anyway I think nobody would

              • Yeah. My sister is still using a thrown-together Celeron 533 with 256MB of RAM (running Windows XP no less) and she generally hasn't complained about it to me. It lets her check email, download digital camera photos, and surf around the web. That's all a lot of people care about.

                My Windows system was honestly a bit sluggish for my tastes with 1GB though (not so much within an app as it was SWITCHING between apps. If I was playing WoW and tabbed out to check something on the web, the system ground to a c
  • Beer Lake? (Score:3, Funny)

    by Zwets ( 645911 ) <jan DOT niestadt AT gmail DOT com> on Monday May 21, 2007 @01:23PM (#19211267) Homepage

    *hic* Best name evar!

    ..oh, wait.

  • by Anonymous Coward
    1333MHz DDR3 RAM should be fast enough for anyone.
  • by EconolineCrush ( 659729 ) on Monday May 21, 2007 @01:37PM (#19211463)
    The Tech Report also has coverage, with full application and peripheral performance testing: http://techreport.com/reviews/2007q2/intel-p35/index.x?pg=1 [techreport.com]
  • by Doc Ruby ( 173196 ) on Monday May 21, 2007 @01:41PM (#19211507) Homepage Journal
    Intel devastated the entire DSP industry in the late 1990s when they staked out the NSP ("Native Signal Processing") strategy of faster clockrates to run DSP in SW instead of in HW. But now they're up against new Cell chips from IBM which multiprocess with parallel DSPs on-chip, and even GSPs ("Graphics Signal Processors") threaten new competition, first from nVidia, then TI and other old surviving rivals, as GPGPU techniques become more sophisticated and applicable.

    All because DSP is more parallelizable than true general-purpose processing, and parallelization is the best route to increasing CPU power, just as the data to be processed is inherently more parallel, increasingly simple streams of "signals", as multimedia convergence redefines computing.

    So when will Intel reverse its epoch of NSP, and deliver new uPs with embedded DSP in HW?
    • Re: (Score:3, Interesting)

      by AKAImBatman ( 238306 ) *

      So when will Intel reverse its epoch of NSP, and deliver new uPs with embedded DSP in HW?

      Probably about the same time that web application developers realize that their problems (particularly AJAX) can be solved more efficiently with a DSP architecture and start designing tiers of servers in a pipelined DSP configuration. Considering the amount of computer science exhibited by this industry, I'd peg it at sometime around a quarter to never.
      • Re: (Score:3, Insightful)

        by Doc Ruby ( 173196 )
        Intel putting parallel DSPs on their uPs is not driven by mere "efficiency". Intel demonstrates, even defines, both the CS and the economics that are forcing competitors, and thereby Intel too, to put DSP in their cores (literally and figuratively).

        And there's not much sense in Web apps being processed by an FIR or full-spectrum mixer.

        All I really get from your comment is that you don't know what DSPs do, or what Intel does - or maybe how Web apps work.
        • Re: (Score:3, Interesting)

          by AKAImBatman ( 238306 ) *
          All I get from your comment is that you weren't following what I was saying.

          Intel's designs are driven by what drives the sales of their processors. For right now, that's gobs of desktop and PC Server machines. The alternative architectures are in no danger of knocking Intel out of that position. They will carve themselves a niche for now, which is why Intel has been more worried about AMD than they've been worried about IBM. Which means that Intel will sit up and take notice of the DSP-oriented chips if an
          • by Doc Ruby ( 173196 ) on Monday May 21, 2007 @04:32PM (#19213667) Homepage Journal
            In our universe, everything can be characterized as a signal. In our society, practically all signals can be usefully digitized. That doesn't mean DSPs are right for everything, because DSPs aren't good at all signal processing, just some - repeated loops of simple, if cumbersome, linear equations.

            DSP is fast math at the expense of fast logic. Web apps have at least as much logic as math, intractably intertwined. DSP of Web apps is inappropriate. DSPs on a chip with fast logic would be good for Web apps and everything else. Intel sells lots of CPUs to process Web apps. And IBM/Toshiba/Sony is planning to sell lots of Cells to do so.

            I know what you're talking about. And I know that you don't.
    • What is a DSP, really? I know it stands for digital signal processor, but I'm not really sure what that means. All that comes to mind is sound synthesizers, or something that takes in some kind of signal and changes its output. They seem really popular.
      • DSP [wikipedia.org] is indeed a "Digital Signal Processor". It's a chip specialized for... processing digital signals. Which nowadays nearly always means running a lot of repeated simple linear transformations of the same basic form on a stream of data. Usually it's a "Multiply/Accumulate" ("MAC" or "MADD"), of the form y=m*x+b, run very fast (billions of times a second), with lots of arithmetic support like zero-delay increments/looping. Also usually at the sacrifice of some performance, or even existence, of some logic o
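
        A minimal illustration of that multiply-accumulate pattern (my own sketch in plain C, not tied to any particular DSP toolchain) is the inner loop of an FIR filter, which is nothing but the same y = m*x + b step repeated in a tight loop:

            /* fir.c -- the multiply-accumulate kernel that a DSP's MAC unit and zero-overhead looping are built to run */
            void fir(const float *x, const float *coeff, float *y, int n, int taps)
            {
                int i, k;
                for (i = taps - 1; i < n; i++) {
                    float acc = 0.0f;
                    for (k = 0; k < taps; k++)
                        acc += coeff[k] * x[i - k];   /* multiply-accumulate, billions per second on a DSP */
                    y[i] = acc;
                }
            }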
    • Why would Intel ever reverse its position on NSP, when embedded DSP in HW can't hold enough uPs for GSPs to effectively compete with the GPGPU techniques employed by NSP? Someone please correct me if I'm wrong, but it seems to me that as long as NSP's GPGPU'd uPs work at ~115% of the DSP from Nvidia, then Intel will have the market cornered.
      • Sony has put Cells into over 3 million PS3s. The Cell is a 3.2GHz PPC on a cache-coherent on-chip bus with 6 usable DSPs for over 200GFLOPS. IBM is putting together everything from workstations to supercomputers out of these same (and denser) Cells in parallel. Other chipmakers are following suit. These are not GPUs, but CPUs with embedded DSP that can process graphics: "NGPU", if you will. But actual GPUs use even more embedded DSPs to get something like 10x the specialized performance.

        NSP is the way to go only when there's CPU
  • by More_Cowbell ( 957742 ) on Monday May 21, 2007 @01:59PM (#19211713) Journal
    Thanks, but I think I will wait for the next chipset ... that can support RAM at 1337MHz.
    • Re: (Score:2, Funny)

      by Anonymous Coward
      I actually worked at Intel during the development phase of the 1333MHz FSB and memory busses, and there was a little discussion among some teams as to whether we should bump it to 1337MHz for the gaming crowd. Most people took it as a joke, but I know some of us were serious - why the hell not, you know?
  • Intel should be applauded for supporting both DDR2 and DDR3 on the same chipset, but this isn't anything like the i820 debacle, is it? Where the memory controller ended up barely supporting RDRAM, just so that you could plug in slower-than-anything SDRAM.
  • Seriously? What is the point of giving new hardware/software codenames? We've all seen "Longhorn", "Revolution", and others, and nobody ever said, "gee, I wonder what that is?" Why can't they just say "the next Windows", "the next Nintendo", or "the next Intel chip"? Damn marketing FUD... Sorry, needed to rant.
    • Are you serious? It really hasn't occurred to you that there are almost always future projects in the pipeline, and simply resorting to calling one the "next" would be completely ambiguous?

      Code names are mostly for internal use anyway. It's a way of referring to a project before marketing gets a hold of it and names it something officially.

      I can't believe I need to explain the purpose of code names to someone on Slashdot.
    • by edwdig ( 47888 )
      Because "The next Intel Chip" isn't very descriptive. They have several designs in progress at once. It gets confusing when your project is "The next next next Intel Chip."

      Also, not all designs get released. It confuses all references to "the next next next Intel Chip" when "the next next Intel Chip" gets canceled (see the 4.0GHz P4). Likewise, "the next Windows" isn't very descriptive, as MS has separate desktop and server lines. Windows ME would've thrown everything off, as there wasn't originally supposed to
  • by eviljav ( 68734 )
    Did anyone happen to see if these chipsets support ECC RAM? The hardware review sites didn't mention if they do or not, but that's usually not something they seem to check. (Or if they do check, they just whine about not being able to overclock or something.)
  • This new chipset supports 1067 MHz DDR-3 max. 1333 MHz is the CPU bus speed. This chipset will probably be revved to officially support 1333 MHz RAM, but not yet.

    But, as many have already discovered, the previous P965 chipset can be made to support DDR2 faster than its specced 800 MHz, and processors above its specced 1067 MHz, so 1333 MHz RAM will PROBABLY work just fine with minor BIOS tweaking, but it's still unofficial.

    I'm waiting for X38, with its dual X16 PCI-E 2.0 slots, among other improvements.
  • I just read the techreport article this morning.

    First off, the PCIe 2.0 support is apparently in the X38 'enthusiast' chipset, so that's one scratch.
    Also, the P35 seems fairly good with DDR3, but it's a hell of a price premium and certainly not an insane speed bump.
    On top of this, the P35 supports the 45nm Penryn CPU; guess what? The 965 chipset also supports Penryn if the boards are designed with this in mind. Some may not work, some may only need a BIOS update, but you will see Penryn working on 965 boards

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...