Intel Pledges 80 Core Processor in 5 Years

ZonkerWilliam writes "Intel has developed an 80-core processor that it claims 'can perform a trillion floating point operations per second.'" From the article: "CEO Paul Otellini held up a silicon wafer with the prototype chips before several thousand attendees at the Intel Developer Forum here on Tuesday. The chips are capable of exchanging data at a terabyte a second, Otellini said during a keynote speech. The company hopes to have these chips ready for commercial production within a five-year window."
  • by Daniel_Staal ( 609844 ) <DStaal@usa.net> on Tuesday September 26, 2006 @03:18PM (#16204761)
    ...Imagine a Beowulf cluster of those!

    (Runs in shame.)
    • Why does this remind me of an announcement of the Osborne II while standing in front of a full warehouse of Osborne Is?
      • On the other hand, the Osborne II never ran the risk of spontaneously inducing nuclear fusion in ambient atmosphere. (I don't even wanna imagine the heat output of 80 cores, even with the relentless march of technology.)
      • Why does this remind me of an announcement of the Osborne II while standing in front of a full warehouse of Osborne Is?

        Only maniacs are going to wait five years to buy a new computer because of this announcement.
        • by ePhil_One ( 634771 ) on Tuesday September 26, 2006 @03:49PM (#16205441) Journal
          Only maniacs are going to wait five years to buy a new computer because of this announcement.

          Personally I'm going to wait for 2013 when the 160-core CPUs are finally out. Only a fool would be buying in 5 years.

          • by Korin43 ( 881732 ) on Tuesday September 26, 2006 @04:05PM (#16205757) Homepage
            Pfft... 2013. I'm gonna wait until 2020 when they finally merge them all back into one fast core.
            • I'm gonna wait until 2020 when they finally merge them all back into one fast core.
              They'll call it a "Bose-Wintel condensate". Instructions will be sent to the single core, but there will be no way of distinguishing which of the merged cores the instruction was run on. This will play havoc with floating point precision, but as Intel commented, "most users don't need that kind of precision anyway".

              The condensate core will also be subject to the laws of quantum mechanics in that, before a program has finished running, there will be no way to know whether it will crash or not. Microsoft plans to leverage this to further stabilise their latest version of Windows. Security experts who worried about the onboard "Quantum-Threading" technology redirecting portions of thread output randomly to other threads were dismissed as not being "forward looking".

              Meanwhile, AMD's new 1W, 128-core, 4098-bit chip with 1GB L2 cache retails for almost 50% more than Intel's Bose-Wintel chips, and has seen sluggish sales since the arrival of the new technology, despite its lower running cost than the 5MW Intel chip. When asked for comment, AMD's spokesman added: "Ch@#&t!! What the f**k is wrong with you people!??! Our chips save you money!! F@#*&^g cheapskates!!!"

              Upon hearing the news, Linux founder and lead developer Linus Torvalds (51) said: "We're not rewriting the kernel for that monstrosity." An Intel representative declared that the company was "disappointed" in Torvalds' remarks. Apple cofounder Steve Jobs (65), when asked whether Apple intended to release a new Mac based on the chipset, declined to comment as he went about his daily 5km morning run. Apple pundits widely believe that the new Mac will run on a quad-core Bose-Wintel Condensate, and to complement this will sport a blazing white, ultra smooth case made out of Bose-Einstein condensate, the fifth phase of matter.

              In a related story, Microsoft cofounder Bill Gates (65) assaulted a technology reporter at a company press conference discussing the new chip. Details are sketchy, but reports mention that one of Mr Gates' older quotes about appropriate amounts of computer memory was brought up. Redmond police have declined to comment on the case.
        • Actually, ten years... five years for it to come out and another five years before the price is right so Joe "Pimp My PC" Blow can afford it.
    • Re: (Score:2, Funny)

      by diersing ( 679767 )
      Finally, a platform that will run Vista.... RTM Bill, RTM!
    • by NiceRoundNumber ( 1000004 ) on Tuesday September 26, 2006 @03:38PM (#16205225)
      ...Imagine a Beowulf cluster of those!

      I never petaflop I didn't like.
  • OH SHI- (Score:2, Funny)

    by Anonymous Coward
    But it still can't tell me 1/0.
  • Exchanging data (data transfer) is not the same thing as operations per second. The post seems to either be confusing the two or stating that the chip does both. I guess I need to go read the article now and find out...
    • Re:Hey now... (Score:5, Insightful)

      by myurr ( 468709 ) on Tuesday September 26, 2006 @03:21PM (#16204837)
      Why oh why won't Intel spend their research dollars on something useful, like a bus architecture that can actually keep up with present performance levels?
      • Re: (Score:3, Insightful)

        by Scarblac ( 122480 )

        Why oh why won't Intel spend their research dollars on something useful, like a bus architecture that can actually keep up with present performance levels?

        Yes, because if Intel is working on one thing, that means they can't work on anything else at all anymore...

      • Re: (Score:3, Informative)

        by Wavicle ( 181176 )
        Your wish has been granted [theregister.co.uk].

        Next!
        • Re: (Score:2, Funny)

          by myurr ( 468709 )
          Wicked... hmmm.... a castle full of nubile virgins all asking me to spank them?
      • Re: (Score:3, Funny)

        by Rik Sweeney ( 471717 )
        like a bus architecture that can actually keep up with present performance levels?

        There's nothing wrong with bus architecture in my opinion.

        I stand at the stop, the digital sign says the bus will be along in 4 minutes.

        4 minutes later the bus turns up.

        I don't see what the problem is.
      • by snuf23 ( 182335 )
        From the FA:

        "In order to move data in between individual cores and into memory, the company plans to use an on-chip interconnect fabric and stacked SRAM (static RAM) chips attached directly to the bottom of the chip, he said."
    • And now that I have read the article there still doesn't seem to be any clarification. If I had to bet it would be on the data transfer speed and not the ops/sec.
      • Re: (Score:2, Informative)

        Otellini meant both flops and memory xfer rate.

        Clarification from TFA:
        "But the ultimate goal, as envisioned by Intel's terascale research prototype, is to enable a trillion floating-point operations per second--a teraflop--on a single chip."

        Further clarification from TFA:
        "Connecting chips directly to each other through tiny wires is called Through Silicon Vias, which Intel discussed in 2005. TSV will give the chip an aggregate memory bandwidth of 1 terabyte per second."
  • So... (Score:3, Funny)

    by dark_15 ( 962590 ) on Tuesday September 26, 2006 @03:19PM (#16204785)
    This will finally run Vista, right??? Maybe? Hopefully?
    • Re:So... (Score:5, Funny)

      by kfg ( 145172 ) * on Tuesday September 26, 2006 @03:33PM (#16205113)
      This will finally run Vista, right?

      And get here ahead of it, so we'll be ready.

      KFG
    • This will finally run Vista, right???

      God, I hope not. I can guarantee that mine won't.
  • by DeathPenguin ( 449875 ) * on Tuesday September 26, 2006 @03:20PM (#16204809)
    Unfortunately, they'll all choke on a shared memory bus :-)
  • Just as they,,,, (Score:5, Insightful)

    by klingens ( 147173 ) on Tuesday September 26, 2006 @03:21PM (#16204835)
    promised us 8-10GHz Pentium 4 CPUs when they started with the P4 "Willamette"? Or how they promised us 5GHz Prescotts?

    I'd rather wait and see what I can actually buy in 5 years. There's no need to trust a vendor's claims about what they can do that far in the future.
  • by hsmith ( 818216 ) on Tuesday September 26, 2006 @03:23PM (#16204863)
    Faster processors are great, but when will we see massive improvements in data storage...
  • PAIIINNN (Score:5, Insightful)

    by Tester ( 591 ) <.olivier.crete. .at. .ocrete.ca.> on Tuesday September 26, 2006 @03:23PM (#16204865) Homepage
    Imagine the pain of having to write functional applications with so many cores. I hope the interconnect will be very, very fast. Otherwise writing massively scalable parallel algorithms will be massively painful. And with so many cores, one will need multiple independent memory banks with some kind of NUMA. And writing apps for those things isn't fun. You have to spend so much time caring about the parallel stuff instead of caring about the problem.
    • Not really, as long as you pick the right tools for the job. Writing code for such a machine using a threaded model would obviously be stupid. Writing it in an asynchronous CSP-based language like Erlang is much easier. There's a language from some guys at IBM, which I saw a presentation on, that looks potentially even more promising, although I can't recall its name at the moment.

      As with anything else in the last 10 years, if you try to pretend you're still writing code for a PDP-11, you'll have problems. If y

  • That's hot!

    (/ducks)
  • Power Consumption (Score:2, Insightful)

    by necrodeep ( 96704 ) *
    I seriously hope that power consumption and heat dissipation are really attacked before these things come out. Can you imagine needing a 200-amp service and liquid nitrogen cooling for something like that right now?
  • by Lost+Found ( 844289 ) on Tuesday September 26, 2006 @03:24PM (#16204895)
    This is hilarious, because if this goes out on the market there aren't going to be many operating systems capable of scheduling on that many cores usefully. OS X can't do it, Windows can't do it, and neither can BSD. But Linux has been scheduling on systems with up to 1,024 processors already :)

    • Wow, good point. I bet Intel never once stopped to think about THAT.

      I sincerely doubt this will make it anywhere near Fry's or CompUSA, assuming it launches in 5+ years. Most likely academic, corporate (think of the old days and mainframe number crunchers on Wall Street), and scientific.

      Simply cheap teraflops for custom applications.

      Of course, everyone thought it was a great idea when Cell announced they could do 64 or more cores. But since this is /. versus Intel, everything has to be a joke, right?
    • Scheduling isn't a one-size-fits-all process. What works at 4 cores doesn't work at 40, and so on. As for other operating systems, FreeBSD has been working quite actively on getting Niagaras working well with their sparc64 port. I've been saying it didn't make sense until this announcement. I figured we'd have no more than 8 cores in 5 years. We'll see what really happens.

      The BSD projects, Apple and Microsoft have five years. Microsoft announced a while back that they want to work on supercomputing versions of Windows. Perhaps they will have something by then. Apple and Intel are bed partners now. I'm sure Intel will help them.

      What this announcement really means is that computer programmers must learn how to break up problems more effectively to take advantage of threading. Computer science programs need to start teaching this shit. A quick "you can do it, go get a master's degree to learn more" isn't going to cut it anymore. There's no going back now.
      • Re: (Score:3, Interesting)

        In other words, get out your functional languages like Haskell and OCaml and use the side-effect free feature set to develop multi-threaded programs. Or do it the hard way with an OO language.
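
For comparison, the "hard way" mentioned above looks roughly like this: a hypothetical C/pthreads sketch in which the programmer does the decomposition by hand (the thread count, array size, and trivial summation are all invented for illustration):

```c
/* Sketch of manual problem decomposition with pthreads: each worker sums
 * its own slice of the array, main() combines the partial results.
 * Compile: cc -pthread psum.c */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N        (1 << 20)

static double data[N];

struct slice { int lo, hi; double partial; };

static void *sum_slice(void *arg)
{
    struct slice *s = arg;
    s->partial = 0.0;
    for (int i = s->lo; i < s->hi; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct slice work[NTHREADS];

    for (int i = 0; i < N; i++)
        data[i] = 1.0;                        /* dummy input */

    int chunk = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        work[t].lo = t * chunk;
        work[t].hi = (t == NTHREADS - 1) ? N : (t + 1) * chunk;
        pthread_create(&tid[t], NULL, sum_slice, &work[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {      /* join and reduce */
        pthread_join(tid[t], NULL);
        total += work[t].partial;
    }
    printf("sum = %.0f\n", total);
    return 0;
}
```

Everything the functional approach would hide (slicing the input, joining threads, combining partial results) is explicit here, which is exactly the burden the earlier comments complain about.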
    • by $RANDOMLUSER ( 804576 ) on Tuesday September 26, 2006 @04:16PM (#16205949)
      From TFA:
      Intel's prototype uses 80 floating-point cores, each running at 3.16GHz, said Justin Rattner, Intel's chief technology officer, in a speech following Otellini's address. In order to move data in between individual cores and into memory, the company plans to use an on-chip interconnect fabric and stacked SRAM (static RAM) chips attached directly to the bottom of the chip, he said.
      So think more like Cell with 80 SPEs. Great for lots of vector processing.
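
For what it's worth, the numbers in TFA roughly add up: 80 cores at 3.16GHz is about 253 billion core-cycles per second, so each core only needs to retire about 4 floating-point operations per cycle (plausible for a small multiply-add vector unit, though the article doesn't specify) to reach the advertised teraflop.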
  • Shame BeOS Died... (Score:5, Informative)

    by Rhys ( 96510 ) on Tuesday September 26, 2006 @03:24PM (#16204911)
    With the heavily threaded nature of BeOS, even demanding apps would really fly on the quad+ core CPUs that are preparing to take over the world.

    Not that you couldn't do threading right in Windows, OS X, or Linux. But BeOS made it practically mandatory: each window was a new thread, as well as an application-level thread, plus any others you wanted to create. So making a crappy application that locks up while it is trying to do something (like update the state of 500+ nodes in a list; ARD3, I'm looking at you) actually took skill and dedication. The default state tended to be applications that wouldn't lock up while they worked, which is really nice.
    • Re: (Score:3, Funny)

      by mindsuck ( 607395 )
      This 80-core processor would probably also benefit from the is_computer_on_fire() [uiuc.edu] syscall available on BeOS.
  • by SevenHands ( 984677 ) on Tuesday September 26, 2006 @03:26PM (#16204965)
    In other news, Gillette pledges a razor with 81 micro blades. 80 blades are individually controlled via Intel's new 80 core processor. The 81st blade is available just because..
    • by hey ( 83763 )
      Good point. Can we coin SevenHands' law: the number of Gillette blades increases at the same rate as Intel cores.
    • No no no - while Moore's law is merely exponential, Gillette's razor blades are on a hyperbolic curve. They'll go to infinity by 2015. The Economist said so! [livejournal.com].
  • Just in time for Vista!
  • Today, a 2-CPU x 2-core computer can actually be slower than a 2x1 or 1x2 core setup for certain "cherry picked to be hard" operations, due to the OS making incorrect assumptions about things like shared/unshared cache (two cores on the same chip may share cache, two on different chips may not) and other issues related to the fact that not all cores are equal as seen by the other cores.

    In an 80-core environment, there will likely be inequalities due to chip-real-estate issues and other considerations. The question
    • by Ant P. ( 974313 )
      The Linux kernel can be made to use a different scheduler for hyperthreading, so maybe the same idea can be used for more complicated setups.
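
For reference, the blunt tool that already exists for this kind of topology problem is explicit CPU affinity. Below is a hypothetical Linux-specific sketch (the choice of core 0 is arbitrary) that pins the calling thread so the scheduler cannot migrate it away from its warm cache; whether an 80-core part would expose its cache topology cleanly enough for this to help is speculation:

```c
/* Sketch: pin the calling thread to CPU 0 with the Linux affinity API,
 * so the scheduler cannot move it away from its warm cache.
 * Compile: cc affinity.c */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                         /* core 0 only */

    /* pid 0 means "the calling thread" */
    if (sched_setaffinity(0, sizeof set, &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0; running on CPU %d\n", sched_getcpu());
    return 0;
}
```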
  • It will take another 15-20 years for software to catch up.
    • by joe 155 ( 937621 )
      Why is that unfortunate? Software and hardware have always run at pretty much the same pace, but I would rather have an 80-core processor which I can keep for 10 years and update my OS to take advantage of more of the cores as time goes by, than have to buy a whole new system at least every 3 years.
  • by GreatBunzinni ( 642500 ) on Tuesday September 26, 2006 @03:38PM (#16205221)
    640 cores ought to be enough for everybody
  • Really, if you read the story, it is 80 floating-point cores! It would be ideal for many graphics, simulation, and general DSP jobs.
    What it isn't is 80 CPU cores.
    Really interesting research, but not likely to show up in your PC anytime soon.
    With all these multi-core chips I am waiting for more specialized cores to start being included on the die.
    After all, a standard notebook will have a CPU, GPU and usually a DSP for WiFi. Seems like linking them all with something like AMD's HyperTransport could offer so
  • ... not for the high price Gassée wanted for it, but for what 3COM got it for. They need that pervasive multi-threading now more than ever. NeXT was good and all, but are they really going to be able to backwardly refine the whole bit? Oh well, at least they've got plenty of old BeOS employees. The pervasive beach balls, however, make me wonder what they're doing all day; new kernel?
  • ...in search of a problem. But don't you worry, those problems will come along soon.
  • by maynard ( 3337 ) on Tuesday September 26, 2006 @03:48PM (#16205415) Journal
    not 80 general purpose integer cores. They're essentially copying the Cell design with large numbers of DSPs each of which has a local store RAM burned onto the main chip. Is this a good idea? Guess we'll find out with the Cell. What interests me most about this announcement is not the computing potential from such a strategy, but that it's an obvious response to IBM and Sony technology.
  • by Henry V .009 ( 518000 ) on Tuesday September 26, 2006 @03:49PM (#16205419) Journal
    You fools! Do you have any clue how much Oracle licenses will cost for this thing?
  • by stonewolf ( 234392 ) on Tuesday September 26, 2006 @03:51PM (#16205475) Homepage
    A couple of things to mention here. Many years ago I read an Intel road map for the x86 processors. It was more than 10 years ago, less than 20 I think. In it they said they would have massively multicore processors coming along around now. They may have forgotten that and reinvented the goal along the way, companies do that. But, they really have been predicting this for a very long time.

    The other thing is that with that many cores, and all the SIMD and graphics instructions that are built into current processors, it looks to me like the obvious reason to have 80 cores is to get rid of graphics coprocessors. You do not need a GPU and a bunch of shaders if you can throw 60 processors at the job. You do need a really good bus, but hey, that's not much of a problem compared to getting 80 cores working on one chip.

    With that kind of computing power you can throw a core at anything you currently use a special chip for. You can get rid of sound cards, network cards, graphics cards... all you need is lots of cores, lots of RAM, a fast interconnect, and some interface logic. Everything else is just a waste of silicon.

    History has shown that general purpose processing always wins in the end.

    I was talking to some folks about this just last Saturday. They didn't believe me. I don't expect y'all to believe me either. :-) The counterexample everyone came up with was, "well, if that is true, why would AMD buy ATI?" The answer to that is simple: they want their patent portfolio and their name. In the short term it even makes sense to put a GPU and some shaders on a chip along with a few cores. At the point where you can put 16 or so cores on a chip, you won't have much use for a GPU.

    Stonewolf
    • by sp3d2orbit ( 81173 ) on Tuesday September 26, 2006 @04:55PM (#16206647)
      I remember doing a project in college where we had to implement an 8-point FFT in software and hardware. It was eye-opening. The hardware implementation ran on an FPGA that had something like a 23MHz clock. The software solution was a C program running on a 2GHz desktop. 23MHz vs. 2GHz. The hardware solution was more than 10X faster.

      I don't think that general purpose processors will ever completely replace special purpose hardware. There is simply too much to be gained by implementing certain features directly on the chip.
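
Spelled out, that ratio is striking: the desktop had roughly 2000/23, or about 87 times, the clock rate, so being more than 10X slower overall means the FPGA was doing on the order of 870 times as much useful work per clock cycle as the general-purpose CPU (a back-of-the-envelope figure, not from the original post).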

  • by MobyDisk ( 75490 ) on Tuesday September 26, 2006 @03:51PM (#16205491) Homepage
    This is the last 3 years of Intel, all over again. Only now the megahertz race is replaced with the multi-core race.

    Intel will create the "CoreScale" technology and make 4, then 8, then 16 cores and up, while their competitors are increasing operations per clock cycle per watt per core. Consumers won't know any better, so they will buy the Intel 64-core processor that runs hotter and slower than the cheaper clone chip that has only 8 cores. Then, when Intel runs up against a wall and gets their butt kicked, they will revert to the original Core 2 Duo design and start competing again.

    Oh, and I predict that AMD will release a new rating called the "core plus rating" so their CPUs will be an Athlon Core 50+, meaning it has the equivalent of 50 cores. Cue n00bs who realize they have only 8 cores and complain.

    And to think I didn't like history in school. Maybe I just hadn't seen enough of it to understand.
  • It'll be called "Core Lots Quad-Duodeca"

  • by lonesometrainer ( 138112 ) <vanlil&yahoo,com> on Tuesday September 26, 2006 @03:54PM (#16205543)
    Software hasn't really improved for maaany years now. Spreadsheets and word processors are more colourful, higher resolution, but are these products smarter, better at all? Would a postgraduate write a better doctoral thesis with Office 2007 than with - say - Word 6.0? Is image manipulation thaat much better with the latest Photoshop than with PS 5.5? With some minor exceptions the answer is clearly no.

    - We were promised Virtual Reality with VR Helmets more than 10 years ago - is this _just_ a matter of hardware?
    - Smart voice recognition? Anyone tried it lately? Anyone tried to write pretty standard letters with it? Disastrous.
    - Intelligent assistants that understand the user's needs? Operating system/application wizards that improve their capabilities while you're working with 'em?

    The applications are missing: they're faster, more colourful, higher resolution, antialiased... but still DUMB.

    Computers are already pretty powerful, please start and make the software smarter, not faster.

    CPU power is not that important anymore.
    • by ultramk ( 470198 ) <ultramkNO@SPAMpacbell.net> on Tuesday September 26, 2006 @04:30PM (#16206205)
      Is image manipulation thaat much better with the latest photoshop than with PS 5.5? With some minor exceptions the answer is clearly no.

      Hah! I am forced to disagree in the strongest possible terms..

      Speaking as a former production artist and current art director, the last couple of generations of graphics software have introduced powerful tools that streamline my workflow in ways I find it hard to even fathom. Ok, let's talk about Illustrator, for example. From 10 -> CS Adobe added in-application 3D rendering of any 2D artwork onto geometric primitives. This is something I used to either have to fake, or take out of the application and into a 3D renderer in order to render simple bottle/can/box packaging proofs. Marketing wants to make a copy change? Make the change to the 2D art and the 3D rendering is updated in real time. Oh, and the new version of InDesign recognizes that the art has been updated and reloads it into the brochure layout. Automatically.

      This is just one feature out of literally hundreds. This one alone saves me an hour or two a day. Seriously, there are projects I can take on today that would have been unthinkable 5 years ago. Pre-press for a 700 page illustrated book project has gone from a week of painful, tedious work down to 30 minutes, of which 20 is letting the PDF render. Seriously.

      Here's the thing, unless you use a piece of software all day, every day, you're really not in any position to comment on how much it has or hasn't changed.

      Photoshop (et al.) are software for professionals, despite the number of dilettantes out there using them to spruce up their MySpace pages.

      m-
  • by loconet ( 415875 ) on Tuesday September 26, 2006 @03:57PM (#16205615) Homepage
    WASHINGTON (Reuters) - In light of Intel's 80-core processor pledge in 5 years, scientists are worried that Richard Branson's pledge [bbc.co.uk] is now too little, too late.
  • by Animats ( 122034 ) on Tuesday September 26, 2006 @04:05PM (#16205751) Homepage

    The big question is how these processors interconnect. Cached shared memory probably won't scale up that high. An SGI study years ago indicated that 20 CPUs was roughly the upper limit before the cache synchronization load became the bottleneck. That number changes somewhat with the hardware technology, but a workable 80-way shared-memory machine seems unlikely.

    There are many alternatives to shared memory, and most of them, historically, are duds. The usual idea is to provide some kind of memory copy function between processors. The IBM Cell is the latest incarnation of this idea, but it has a long and disappointing history, going back to the nCube, the BBN Butterfly, and even the ILLIAC IV from the 1960s. Most of these, including the Cell, suffered from not having enough memory per processor.

    Historically, shared-memory multiprocessors work, and loosely coupled network based clusters work. But nothing in between has ever been notably successful.

    One big problem has typically been that the protection hardware in non-shared-memory multiprocessors hasn't been well worked out. The InfiniBand people are starting to think about this. They have a system for setting up one-way queues between machines in such a way that applications can queue data for another machine without going through the OS, yet while retaining memory protection. That's a good idea. It's not well integrated into the CPU architecture, because it's an add-on as an I/O device. But it's a start.

    You need two basic facilities in a non-shared memory multiprocessor - the ability to make a synchronous call (like a subroutine call) to another processor, and the ability to queue bulk data in a one-way fashion. (Yes, you can build one from the other, but there's a major performance hit if you do. You need good support for both.) These are the same facilities one has for interprocess communication in operating systems that support it well. (QNX probably leads in this; Minix 3 could get there. If you have to implement this, look at how QNX does it, and learn why it was so slow in Mach.)
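
As a rough userland analogue of the "one-way queue" facility described above, POSIX message queues already give you an asynchronous enqueue that does not wait for a reply; the synchronous-call half would need a second queue going the other way. A hedged, single-process C sketch (the queue name and message sizes are invented):

```c
/* Sketch of the "one-way queue" facility using POSIX message queues;
 * a synchronous call can then be approximated with a second queue for replies.
 * Compile: cc mq.c -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

#define QNAME "/bulk_data"        /* hypothetical queue name */

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };
    mqd_t q = mq_open(QNAME, O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "block 42 of the bulk transfer";
    mq_send(q, msg, strlen(msg) + 1, 0);      /* enqueue without waiting for a reply */

    char buf[128];
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
    if (n >= 0)
        printf("dequeued %zd bytes: %s\n", n, buf);

    mq_close(q);
    mq_unlink(QNAME);
    return 0;
}
```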

    • They are claiming a terabyte per second interconnect. I think it is safe to assume it will be something like InfiniBand, Myrinet or a similar (NEC's IXS, IBM's HPS) high-performance application networking technology.

      What you're asking for is pretty standard stuff in the high end, where hundreds of processors are quite common. Cache coherency is a killer, and so it died out long ago in the high end. When you think about it, CC basically requires a crossbar-switch style memory architecture which ex
