Intel Hardware

Intel To Ship 48-Core Test Systems To Researchers

MojoKid writes "Just when you thought your 6-core chip was the fastest processor on the planet, Intel announces plans to ship systems equipped with an experimental 48-core CPU to a handful of lucky researchers sometime by the end of the second quarter. The 48 cores are arranged with multiple connect points in a serial mesh network to transfer data between cores. Each core also has on-chip buffers to instantly exchange data in parallel across all cores. According to Sean Koehl, technology evangelist with Intel Labs, the chip only draws between 25 and 125 watts."
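
For a rough feel of what "a mesh network to transfer data between cores" means in practice, here is a small illustrative sketch in C that computes the router hop count between two cores on a 2D mesh with simple X-then-Y routing. The 6x4 tile grid with two cores per tile matches public descriptions of the research chip, but the routing model here is only an analogy, not Intel's implementation.

    /* Illustrative only: hop count between tiles on a 2D mesh with
     * dimension-ordered (X-then-Y) routing. The 6x4 grid of tiles,
     * two cores per tile, is assumed from public descriptions. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MESH_W 6
    #define MESH_H 4

    typedef struct { int x, y; } Tile;

    /* Tile hosting a given core, with tiles numbered row-major. */
    static Tile tile_of_core(int core)
    {
        int t = core / 2;                 /* two cores per tile */
        Tile pos = { t % MESH_W, t / MESH_W };
        return pos;
    }

    /* With X-then-Y routing the hop count is the Manhattan distance. */
    static int hops(Tile a, Tile b)
    {
        return abs(a.x - b.x) + abs(a.y - b.y);
    }

    int main(void)
    {
        Tile a = tile_of_core(0), b = tile_of_core(47);
        printf("core 0 -> core 47: %d router hops\n", hops(a, b));
        return 0;
    }

Under this model a message between the two most distant cores crosses (6-1)+(4-1) = 8 routers, which is why per-hop latency on the mesh, rather than any notion of "instant" exchange, dominates core-to-core communication cost.
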
  • by toygeek ( 473120 ) on Saturday April 10, 2010 @05:44AM (#31798772) Journal

    Can you imagine a *Beowulf cluster* of these things!? Think about the possibilities!

    • never have a loading bar again!

    • by Anonymous Coward on Saturday April 10, 2010 @06:55AM (#31798920)

      Yes, I can think of the possibilities...

      A Jaguar (or Roadrunner) of these processors would still be too slow to numerically solve the geomechanics problems I grapple with daily though. A Jaguar equipped with these processors would be approximately 20 petaflops peak. To simulate 1 sec of fracture of a 10mm cube of rock on the atomic scale would require of order 10^36 floating point operations. To do that would take 10^20 sec at 10 petaflops. Not bad really...that's only 10^12 years. Oh wait, the universe hasn't even been around that long...
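
      A quick sanity check of that arithmetic, using the figures above (10^36 operations at a sustained 10 PFLOP/s); just a sketch in C:

          /* Back-of-the-envelope check of the figures above. */
          #include <stdio.h>

          int main(void)
          {
              double ops     = 1e36;                 /* estimated FP operations */
              double rate    = 1e16;                 /* 10 petaflops, sustained */
              double seconds = ops / rate;
              double years   = seconds / (365.25 * 24 * 3600);

              printf("%.1e s  ~  %.1e years\n", seconds, years);
              return 0;
          }

      It prints roughly 10^20 seconds, about 3 x 10^12 years, which is indeed a few hundred times the age of the universe.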

      Having said that I'm a researcher who writes and uses high-performance parallel software daily. How might I become one of Intel's select few to trial these chips? I can certainly think of ways to keep them warm!

      Please Intel please! ;)

      • by MrMr ( 219533 ) on Saturday April 10, 2010 @07:23AM (#31798988)
        fracture of a 10mm cube of rock on the atomic scale
        Ha, I can do that in less than a second, with my serial mallet.
        • Re: (Score:3, Funny)

          fracture of a 10mm cube of rock on the atomic scale
          Ha, I can do that in less than a second, with my serial mallet.

          You left out an important detail: 1 sec of fracture of a 10mm cube of rock on the atomic scale

          Whoa! Chuck Norris has a Slashdot account?

          • Who cares, it's not like he is really THAT good... and no I am not afraid of saying that, he cannot be everywhere at once. Hullo... there is a knock at my door...

        • I can do that in less than a second with my cereal mullet.
      • Re: (Score:3, Funny)

        by urusan ( 1755332 )

        It certainly beats the 10^13 years it would take with a Jaguar!

      • Talking about Jaguars and multiple processors reminds me of a certain console...

      • by WrongSizeGlass ( 838941 ) on Saturday April 10, 2010 @08:18AM (#31799122)

        How might I become one of Intel's select few to trial these chips? I can certainly think of ways to keep them warm!

        Please Intel please! ;)

        Well, posting as AC certainly will help your chances.

      • Re: (Score:2, Insightful)

        by anarche ( 1525323 )

        Having said that I'm a researcher who writes and uses high-performance parallel software daily. How might I become one of Intel's select few to trial these chips? I can certainly think of ways to keep them warm!

        ummm, let's start by not explaining why one of these things won't help your research?

      • Re: (Score:3, Insightful)

        by haruchai ( 17472 )

        Let me save you some time, trouble and hair - your problems are too difficult to solve numerically. Go find something else to do, unless you plan to live for a very, very long time.

  • Larrabee (Score:5, Interesting)

    by TheKidWho ( 705796 ) on Saturday April 10, 2010 @05:49AM (#31798776)

    I believe these are the remnants of Intel's failed Larrabee chipset, which was supposed to compete with Nvidia and ATI.

    A nice article on the story behind Larrabee and its failure:
    http://www.brightsideofnews.com/news/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed.aspx [brightsideofnews.com]

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Indeed. Or, to be more precise, it's their "Bangalore" chip which is basically the same thing as Larrabee without the graphics-specific subunits (texture unit) and perhaps the fancy-pants cache coherency / ring bus architecture.

      • Re:Larrabee (Score:5, Funny)

        by Anonymous Coward on Saturday April 10, 2010 @01:30PM (#31800366)

        That's not a particularly auspicious name for a chip. I'd assume that a "Bangalore" CPU would promise that it could get the work done twice as fast for half as much money due to "parallel architecture" - but you'd launch a program, only to discover that it actually took 10x as long, every instruction needed to be told *exactly* what to do, and the results were so full of errors that it took an additional non-Bangalore CPU working full time just to get things right.

        • by jon3k ( 691256 )
          HAHAHAHAHAHAHA I'm dying over here. Apparently the Indians got moderator points today.
      • Re: (Score:2, Informative)

        by Spy Hunter ( 317220 )

        No, actually this is a separate effort entirely. This is a product of the same group which produced the "Polaris" 80-core chip, and is meant for research into communication models and memory architectures for massively parallel systems.

        Larrabee is still ongoing as a separate project with a different focus. Larrabee is all about getting maximum throughput by adding a wide vector unit with a whole new instruction set to each x86 core. As far as anyone outside Intel knows, the plan is still to eventually re
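
        To make the "wide vector unit" idea concrete, here is a hedged sketch using plain 128-bit SSE intrinsics (four floats per instruction). Larrabee's unit was reportedly 512 bits wide with its own new instruction set, so treat this strictly as an analogy for packing multiple lanes into one instruction, not as Larrabee code.

            /* Scalar vs. packed addition: the same work done one lane at a
             * time versus four lanes per instruction. Larrabee's vector unit
             * was reportedly 16 lanes wide; plain SSE is only an analogy. */
            #include <stdio.h>
            #include <xmmintrin.h>   /* SSE: 128-bit registers, 4 x float */

            static void add_scalar(const float *a, const float *b, float *out, int n)
            {
                for (int i = 0; i < n; i++)
                    out[i] = a[i] + b[i];
            }

            static void add_sse(const float *a, const float *b, float *out, int n)
            {
                int i = 0;
                for (; i + 4 <= n; i += 4) {
                    __m128 va = _mm_loadu_ps(a + i);
                    __m128 vb = _mm_loadu_ps(b + i);
                    _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
                }
                for (; i < n; i++)            /* leftover elements */
                    out[i] = a[i] + b[i];
            }

            int main(void)
            {
                float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, out[8];
                add_scalar(a, b, out, 8);
                add_sse(a, b, out, 8);
                printf("%.0f %.0f\n", out[0], out[7]);   /* 9 9 either way */
                return 0;
            }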

    • Re:Larrabee (Score:5, Interesting)

      by jmknsd ( 1184359 ) on Saturday April 10, 2010 @11:43AM (#31799960)

      No, actually it is basically a bunch of Pentium 3s with cache coherency removed in favor of a small chunk of on-chip RAM, plus a message-passing interface for inter-core communication. It has a lot of interesting features, and is more usable than the 80-core chip they came out with a few years ago.
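
      As a software analogy for that message-passing model (cores exchanging data through small buffers rather than through coherent shared caches), here is a minimal two-thread sketch in C with POSIX threads. On the real chip this role is played by the on-die message buffers and a message-passing library, so the mutex and condition variable here are only stand-ins to keep the sketch portable.

          /* Toy message-passing analogy: "core 0" writes a small message into
           * a fixed buffer, "core 1" waits for it and reads it out. On the
           * real chip the buffer would be the on-die message-passing memory;
           * pthreads are used here only to keep the sketch portable. */
          #include <pthread.h>
          #include <stdio.h>

          #define BUF_BYTES 64          /* pretend per-core message buffer */

          static char buf[BUF_BYTES];
          static int  full = 0;
          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

          static void *sender(void *arg)
          {
              (void)arg;
              pthread_mutex_lock(&lock);
              snprintf(buf, sizeof buf, "hello from core 0");
              full = 1;
              pthread_cond_signal(&cond);
              pthread_mutex_unlock(&lock);
              return NULL;
          }

          static void *receiver(void *arg)
          {
              (void)arg;
              pthread_mutex_lock(&lock);
              while (!full)
                  pthread_cond_wait(&cond, &lock);
              printf("core 1 received: %s\n", buf);
              pthread_mutex_unlock(&lock);
              return NULL;
          }

          int main(void)
          {
              pthread_t t0, t1;
              pthread_create(&t1, NULL, receiver, NULL);
              pthread_create(&t0, NULL, sender, NULL);
              pthread_join(t0, NULL);
              pthread_join(t1, NULL);
              return 0;
          }

      Build with gcc -pthread; it prints a single "core 1 received: hello from core 0" line.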

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Yeah, that's what I was thinking too. It suddenly becomes a whole lot less exciting when you consider that it's just a 48-core first-generation Pentium rather than a 48-core i7.

  • bullshitter (Score:3, Insightful)

    by Anonymous Coward on Saturday April 10, 2010 @05:53AM (#31798788)

    >>> Sean Koehl, technology evangelist
    Oh... a bullshitter

  • And it runs Linux (Score:3, Informative)

    by Macka ( 9388 ) on Saturday April 10, 2010 @06:10AM (#31798832)

    According to the video they're running Linux on this thing with a custom kernel. No specific details on the changes they had to make to get it running yet.

    • According to the video they're running Linux on this thing with a custom kernel.

      Is that custom-kerneled Linux open-sourced? o0

        • It's just for concept use; I doubt they will release it.

        • Re: (Score:2, Insightful)

          by klingens ( 147173 )

          If they distribute it to the researchers they must release it to the researchers or commit a GPL violation.

          Of course the researchers don't want to demand source since then they won't get freebies like this the next time Intel does such a Santa Claus imitation of distributing presents.

          There's an interesting thought: what happens if you are a beta tester who has to sign a NDA to get something which includes GPL code. What takes precedence? Your NDA, or your right to demand source to the GPL stuff and redistri

          • Re: (Score:3, Informative)

            by solevita ( 967690 )

            or your right to demand source to the GPL stuff and redistribute it publically?

            But this is all just wrong. There's no requirement to make GPL code public, you only need to make it available to the people that receive the binaries. So the researchers will likely be given some source code, but nobody has to release that to the rest of us.

            • So the researchers will likely be given some source code, but nobody has to release that to the rest of us.

              Yes, none of the researchers has to release anything, but at the same time they have the right under the GPL to do so if they wish. So the right under the GPL conflicts with the probable NDA they signed. Can you sign away the rights you have under the GPL?

              • GPL says that if they don't have the right to redistribute for another reason, due to another license, then they don't have the right to redistribute the GPL code, IIRC.

            • by mdwh2 ( 535323 )

              Yes but the researchers can choose to redistribute the binaries, and then any of the recipients also have the right to receive the source.

          • Re: (Score:2, Interesting)

            by mdwh2 ( 535323 )

            http://www.gnu.org/licenses/gpl.html [gnu.org] , section 10:

            Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

            An entity transaction is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covere

          • Re: (Score:3, Insightful)

            by TheRaven64 ( 641858 )

            What takes precedence? Your NDA, or your right to demand source to the GPL stuff and redistribute it publically?

            It's complicated. The GPL takes priority because the NDA is in violation of it, however the problem is that, by making you sign the NDA, Intel[1] is in violation of the GPL, but you do not have standing to sue them for it. The kernel developers could sue Intel for copyright infringement, by distributing their code without a valid license to do so. Intel could sue you for breach of the NDA. You, however, would have no recourse against Intel if you chose to distribute the code.

            In fact, Intel could dist


          • what happens if you are a beta tester who has to sign a NDA to get something which includes GPL code. What takes precedence?

            The GPL doesn't allow further restrictions to be placed on it. If a company tried such a tactic they'd be in violation of the GPL and the copyright owner could sue. Remember that the GPL also protects the rights of the copyright holder.

          • by mysidia ( 191772 )

            Well, legally, you have to follow them both, as long as they are both enforceable, unless the NDA specifies the software is not covered, or Intel somehow excludes it from the NDA.

            That is impossible, so you can't distribute any part of the software in any way, ever. But a consequence of that is Intel would have violated the GPL by distributing open source software under an NDA.

            Yes, a very clean-cut violation, for the person issuing GPL code under NDA. You are not supposed to be able to do that legally,

  • by youn ( 1516637 ) on Saturday April 10, 2010 @06:33AM (#31798886) Homepage

    Maybe that's what Bill Gates meant when he said 640K should be enough... K as in Core... it was a spelling mistake ;)

    • by selven ( 1556643 ) on Saturday April 10, 2010 @06:57AM (#31798930)

      With the kinds of things Bill Gates did, I don't know if even 640 °C would be enough.

    • Or 640 Kelvin?

    • Re: (Score:3, Funny)

      by TeknoHog ( 164938 )
      Too bad multiprocessing did not exist back then, as Intel had yet to invent the Core.
      • Sure about that? Sure that no mainframe, VAX, or supercomputer had multiprocessing at the time?

        • Re: (Score:3, Insightful)

          by TeknoHog ( 164938 )
          *whoosh* I wrote my first SMP code in 2001, and it was the typical thing to do in scientific computing, had been for decades. Thus I occasionally like to comment on the recent years' "multicore" marketing phenomenon, where even some developers seem to think they have a completely new problem and they need completely new tools.
          • So your comment was meant sarcastically. Well, yes, then it all makes sense. It was just that the joke was lost on me.

          • by jon3k ( 691256 )
            SMP doesn't necessarily mean multi-core. And I'd call 6-core (Intel) and 12-core (AMD) CPUs officially a "multicore phenomenon".
            • Re: (Score:1, Insightful)

              by Anonymous Coward

              SMP has been around a long time. I remember in 1993 and 1994 that people were trying to do their best to get code working on 4 CPU SGI Indigos.

            • SMP doesn't necessarily mean multi-core.

              That was kind of my point, as I had a dual P3 system way before this "multicore" stuff, and I knew it was not particularly new or special even then.

              • by jon3k ( 691256 )
                I think there was some sarcasm I missed? I'm thinking that was the point of the proper name "Core". It's starting to sink in now ...
        • Re: (Score:2, Funny)

          by Anonymous Coward

          Mainframe, VAX, Supercomputer had Multiprocessing at the time?

          No

          Intel actually developed the first multi-core CPU and multi-processor systems at the behest of Steve Jobs as a condition for migrating OSX to the x86 platform. Further, it is speculated on good authority that Jobs personally headed up a crack engineering team sent to Intel expressly for the purpose of transitioning their fabs from the netburst to the core architecture. Seriously, study and learn.

          Posted anonymously from my iPad at the Starbucks in Cupertino. You know the one.

    • 6 Coors are typically enough, unless you throw a small party, then you'd want 48.
    • no no, he meant Kores

      http://www.kore-usa.com/ [kore-usa.com]

      coz he needed 640 pedal-pushing monkeys to power his plans for power dominion.

    • by RichiH ( 749257 )

      640 k cores, you mean.

    • So, Intel, only 592 more cores to go, then it'll be enough for anybody. Get to work!
    • Re: (Score:1, Funny)

      by Anonymous Coward

      Nah - you'd still need 639 of them to run antivirus.

  • Can you imagine... (Score:4, Insightful)

    by EnsilZah ( 575600 ) <.moc.liamG. .ta. .haZlisnE.> on Saturday April 10, 2010 @06:41AM (#31798900)

    ...a Beowulf cluster of engineers awkwardly reading marketing information from a teleprompter?

  • I hope this is not a marketing ploy. I am more interested in thread management and so can't wait for the benchmark reports, if they are made public.
  • Tilera (Score:4, Informative)

    by loufoque ( 1400831 ) on Saturday April 10, 2010 @07:02AM (#31798942)

    Might as well buy a Tilera if it's for research...
    The only good thing about x86 is that it runs legacy Windows programs, but who cares about that in research?

    • by dbIII ( 701233 )
      There are commercial geophysics programs that only run on Sparc and x86 - and some of them are dropping support for Sparc. The biggest x86 you can throw the software at gets the job done.
      I'm sure there are similar stories in other fields.
      • They're using clusters made of "off-the-shelf" hardware.
        The 48-core Intel CPU is certainly not that.

        • by dbIII ( 701233 )
          No, but it's still very likely that the binaries would be able to run without modification. That's a very big deal when you have no chance of getting anywhere near the source code.
    • The only good thing about x86 is that it runs legacy Windows programs, but who cares about that in research?

      Just because source code is available doesn't mean your problems are over. Even ironing out x86-64 (which millions of people can use) has taken years for the linux distros.

      • Re: (Score:3, Insightful)

        by oakgrove ( 845019 )

        Just because source code is available... Even ironing out x86-64 (which millions of people can use) has taken years for the linux distros.

        Interesting. See, I was running 64 bit distros when Vista was still called Longhorn. I'm also quite sure it was a year or so before XP x64 edition was released. And everything ran great. The only problem I had was flash. Of course, I had the source for everything except for, you guessed it, flash. Imagine that.

  • Correction (Score:5, Informative)

    by Anonymous Coward on Saturday April 10, 2010 @07:09AM (#31798956)

    "Just when you thought your 6-core chip was the fastest processor on the planet, Intel announces plans to ship systems equipped with an experimental 48-core CPU to a handful of lucky researchers sometime by the end of the second quarter.

    Actually, the 8-core (Nehalem EX) and 12-core (Opteron "Magny-Cours") CPUs are already faster than your 6-core CPU. And oddly enough, this 48-core CPU is actually slower than your 6-core, 8-core, or 12-core CPUs. Intel didn't design the 48-core CPU to sell it. They did it as a research project/experiment to develop new ways of interconnecting so many processing cores. While there are technically 48 cores they are far less complex and slower performing than anything that Intel is shipping retail. If you go back a year or two you can find articles where Intel unveiled the CPU and talked about performance. This is simply an exercise in massively parallel CPU design, not an effort to make a faster CPU. That's why they are shipping them to researchers, so they can study and learn how to develop uses for such massively parallel systems.

    • Actually, no one is making faster cores any more. We've hit a large technological roadblock in that area, due largely to heat dissipation issues. The fastest commercially available x86 chips have been a little under 4 GHz for about five years now. Current chip design focuses on heat and power issues and increasing the number of cores on the chip.

      • Re: (Score:3, Insightful)

        by mdwh2 ( 535323 )

        The fastest commercially available x86 chips have been a little under 4 GHz for about five years now.

        Megahertz Myth. As far as I can tell, over the last 5 years individual cores have still been getting faster, just not with higher clock speeds.

        • Megahertz Myth. As far as I can tell, over the last 5 years individual cores have still been getting faster, just not with higher clock speeds.

          They are going faster by shortening the pipeline, so you get a shorter execution time when you have a branch. There are also more execution units. At first there was only one ALU doing everything; now in a standard CPU you find multiple dedicated units for integer, logical, and floating-point operations. That allows many operations to execute in parallel, as long as there's no dependency between the data.
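
          A small sketch of why multiple execution units help even single-threaded code: summing an array with one accumulator forms a serial dependency chain, while several independent accumulators give the hardware additions it can issue in parallel. Whether and how much this speeds anything up depends entirely on the particular CPU and compiler, so the example only illustrates the dependency argument.

              /* One accumulator forms a serial dependency chain; four independent
               * accumulators give an out-of-order core additions it can schedule
               * onto several execution units. Any real speedup depends on the
               * specific CPU and compiler flags. */
              #include <stdio.h>

              static double sum_chain(const double *a, int n)
              {
                  double s = 0.0;
                  for (int i = 0; i < n; i++)
                      s += a[i];                   /* each add waits on the last */
                  return s;
              }

              static double sum_unrolled(const double *a, int n)
              {
                  double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
                  int i = 0;
                  for (; i + 4 <= n; i += 4) {     /* four independent chains */
                      s0 += a[i];
                      s1 += a[i + 1];
                      s2 += a[i + 2];
                      s3 += a[i + 3];
                  }
                  for (; i < n; i++)
                      s0 += a[i];
                  return (s0 + s1) + (s2 + s3);    /* FP rounding may differ slightly */
              }

              int main(void)
              {
                  enum { N = 1 << 20 };
                  static double a[N];
                  for (int i = 0; i < N; i++)
                      a[i] = 1.0 / (i + 1);
                  printf("%f %f\n", sum_chain(a, N), sum_unrolled(a, N));
                  return 0;
              }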

          • Wouldn't they get even shorter interconnects between the cores if they finally go into the third dimension? "Cube" would have a totally new meaning. They just have to solve those heat problems...
        • Re: (Score:3, Interesting)

          by Spatial ( 1235392 )
          Correct.

          Clock frequency is worthless as a measure of CPU performance. Cores have never stopped getting faster.

          For example: Each individual core in a 2.66GHz i5-750 is more than twice as fast as a 3.8GHz P4. Often many times faster than that, depending on the workload.
          • by jon3k ( 691256 )
            "Clock frequency is worthless as a measure of CPU performance."

            Well you mean when comparing chips of different architectures, yes.
            • Given that parent was talking about all x86 CPUs made in the last five years, I thought that was implicit.
    • by tius ( 455341 )

      And it won't be long before the "number of cores == speed" myth starts to rear its ugly head. Communications overhead eventually has a major impact on how much computing a processor/core can achieve; i.e. memory, I/O, inter-core. In more classical parallel implementations this is strongly felt around 8 processors/cores. However, today we have the benefit of shrinking high speed packet networks to the system and chip level, so I'm not sure where the knee in the performance curve is. If we're lucky the
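
      A toy model of that knee: a fixed serial fraction plus a per-core communication cost eventually cancels the gain from adding cores. The constants below are invented purely for illustration and say nothing about this particular chip.

          /* Toy scaling model: T(p) = serial + parallel/p + comm*p.
           * All constants are invented; the point is only that speedup
           * peaks and then falls as per-core communication cost grows. */
          #include <stdio.h>

          int main(void)
          {
              const double serial = 0.05, parallel = 0.95, comm = 0.002;
              double t1 = serial + parallel + comm;      /* time on one core */

              for (int p = 1; p <= 64; p *= 2) {
                  double tp = serial + parallel / p + comm * p;
                  printf("p=%2d  speedup=%.2f\n", p, t1 / tp);
              }
              return 0;
          }

      With these made-up constants the speedup peaks around 16 cores and then degrades, which is the kind of knee the comment describes; real curves depend on the interconnect and the workload.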

    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )

      Intel didn't design the 48-core CPU to sell it

      Actually, they did. Unfortunately, it was delayed and didn't work as well as they'd hoped and it would have been a complete flop in its original target market so they're shipping it as a research toy to try to recoup some of their investment.

      • Re: (Score:2, Interesting)

        by symbolset ( 646467 )
        More likely they've got this widget sitting around with all the requisite engineers raring to go. But it's a wrench that fits no bolt - they need research scientists with the type of problems that this solves to put a load on it, define the scope of its use and put it to work so they can refine the toolchain and broaden the scope.
      • I doubt recouping their investment has anything to do with it; they're not going to be able to sell it to many people, and probably not for a very high price either. They also may have to worry about support costs; they'd probably have been better off canceling it a week before shipping and saving that week's worth of engineering time.

        However, this little experiment (or parts of it, anyway) may end up in future generations of Intel CPUs. They want to get it into researchers' hands now so that they can find

    • ``This is simply an exercise in massively parallel CPU design, not an effort to make a faster CPU. That's why they are shipping them to researchers, so they can study and learn how to develop uses for such massively parallel systems.''

      Perhaps it would be interesting to mention Azul Systems [azulsystems.com] at this point. They sell systems with 108 to 864 cores [azulsystems.com], so they may know a thing or two about "massively parallel".

  • by Anonymous Coward

    There is a typo in the headline.

  • So when analyzing a kernel dump caused by a deadlock with spin locks, I get to look at 48 stack traces, to find out who got what where, and who wants what they will never get?

    Sounds like fun.

    Ok, creative use of LPARs/Virtualization technologies could reduce the headaches. A friend of mine owned an ancient 6-cylinder Jaguar that spent more time in the repair shop than on the road. He was looking at a 12-cylinder also in the shop, when the chief mechanic commented, "You don't want that. A 12-cylinder jus

    • I believe a system like that is supposed to be supported by a hypervisor, which will run just one operating system per core, e.g. Barrelfish [barrelfish.org].
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Only 48? Don't forget hyperthreading.

  • by LoudMusic ( 199347 ) on Saturday April 10, 2010 @08:03AM (#31799086)

    3DFX, so powerful it's kind of ridiculous.

    http://www.youtube.com/watch?v=DmaYH1F6kho [youtube.com]

    http://www.youtube.com/watch?v=ldiYYJNnQUk [youtube.com]

    http://www.youtube.com/watch?v=o72T8qQr7GE [youtube.com]

    Great ad campaign.

    • 3DFX did deliver great GPUs to the market. When I finally upgraded from the Voodoo3 to the GeForce 4 Ti (several generations beyond the Voodoo3), that Voodoo *still* kicked its ass in multi-texturing fill rate.

      Make what you want of that, but I say that 3DFX went under because of mismanagement, and not because the company couldn't deliver a great product. The company did in fact deliver a great product, right up to the end.
      • Guess what I found the other day? My 12MB Voodoo2 cards. They're missing the SLI cable, and would probably be destroyed by a modern graphics card anyway (I think my old Radeon 7000 would blow them out of the water, so a $50 card probably will too)

        I remember playing Quake 2 at ****1024X768**** on a K6-2 (266@337). I wonder if someone still hacks drivers for those cards?
  • by Mr Pippin ( 659094 ) on Saturday April 10, 2010 @09:20AM (#31799354)
    Sounds quite a bit like the INMOS Transputer http://en.wikipedia.org/wiki/Transputer [wikipedia.org]
    Wonder what version of Occam (the programming language) will ship with it?
  • Thread, thread, thread, thread! (Images of Ballmer hopping around). BeOS rocked back in the day because it APPEARED to be faster, because of its pervasive multithreading. Nowadays people are impressed when MS multithreads Win7 to make it more responsive. Imagine what you can do with 48 cores.

  • by Skaven04 ( 449705 ) on Saturday April 10, 2010 @10:48AM (#31799724) Homepage

    AMD's new 12-core "Magny-Cours" Opteron parts will be available in 4P configurations with 48 cores and up to 512GB RAM, so...::yawn::

    • Re: (Score:3, Informative)

      by Skaven04 ( 449705 )

      Actually 768GB RAM...12 DIMMs per socket (if an OEM chooses to max out the config) across 4 sockets, with 16GB DDR3 DIMMs == 768GB.

  • Paul Lily's assertion: "That doesn't mean that you just wasted $1,100 and that your Core i7 980X is suddenly obsolete. As part of a research project, the 48-core part might never become commercially available, and if it did, it would be destined for mainframes and supercomputing tasks, not home desktops." is just plain wrong. Anyone who has been around computer development for any time at all knows that today's supercomputer is tomorrow's desktop. While it may not be this exact CPU, sooner or later 48 core
  • 48 cores (Score:5, Insightful)

    by nurb432 ( 527695 ) on Saturday April 10, 2010 @11:00AM (#31799784) Homepage Journal

    And still one external memory bus.

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Actually, 4 DDR3 memory controllers, each of which can independently talk to a bank of DDR3 memory.

  • Alas, there's no such thing as "instantly", especially in multi-processor core systems. It takes all too long to move data around.
  • Time to get the rumor mill going....
  • by Anonymous Coward

    Intel did not originally design this CPU.

    The 64-core mesh CPU has been sold since 2003 from http://www.tilera.com/products/TILE64.php

    Here is a video of one of the founders who originally designed the mesh CPU.
    http://mitworld.mit.edu/video/671

    They have a 120-core CPU coming out soon.

    Point is, Intel has found a way around their patents to design their own mesh cpu.
    It kind of sickened me to watch Intel take all credit for this mesh design.

  • I am the widow of NGALA MTUR NGILI, ex-director of Intel-Nigeria. When he left Intel he managed to stash FIVE MILLION 48-CORE CHIPS in a box in Switzerland. If you help me get these, you may keep TWO MILLION OF THOSE CHIPS FOR YOURSELF. NIGLIA MTUR NGILI niglia@hotmail.com
  • Finally a CPU that might let Battlefield: Bad Company 2 run at an acceptable framerate.
