Tilera To Release 100-Core Processor

angry tapir writes "Tilera has announced new general-purpose CPUs, including a 100-core chip. The two-year-old startup's Tile-GX series of chips is targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search. The Gx100 100-core chip will draw close to 55 watts of power at maximum performance."

  • by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Monday October 26, 2009 @04:35AM (#29870097)

    Yes, I suppose technically any FPGA could be considered a "core" in its own right, but it's a far cry from the CPU cores that you typically associate with the term.

    Putting a stock on a semi-automatic rifle makes it an "assault weapon", but c'mon. It's still a pea shooter.

  • Custom ISA? (Score:5, Insightful)

    by Henriok ( 6762 ) on Monday October 26, 2009 @04:37AM (#29870111)
    Massive numbers of cores are cool and all that, but if the instruction set isn't a standard one (i.e. x86, SPARC, ARM, PowerPC or MIPS), chances are it won't see the light of day outside highly customized applications. Sure, Linux will probably run on it. Linux runs on anything. But it won't be put in a regular computer other than as an accelerator of some sort, like GPUs, which are massively multicore too. Intel's Larrabee, though...
  • Re:This is great ! (Score:5, Insightful)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Monday October 26, 2009 @04:46AM (#29870149)

    By the way, I just typed "make menuconfig" and it will let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores; they are handled the same way by Linux, it seems) as far as I can tell by this simple test. Entering a number greater than 512 gives the "You have made an invalid entry" message.

    Whoa. If you change the source a little, you can enter 1000000 into the Maximum number of CPUs field! Linux is ready for up to a million cores.

    If you change the code a little more, when you enter a number that's too high for menuconfig, it says "We're not talking about your penis size, Holmes".
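
    CONFIG_NR_CPUS only caps how many CPUs the kernel will manage; a program can ask at runtime how many CPUs the kernel knows about versus how many are online. A minimal sketch in C, assuming the common sysconf extensions (glibc/BSD, not strict POSIX):

        /* cpucount.c -- configured vs. online CPUs.
           _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN are
           glibc/BSD extensions, not strict POSIX. */
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long conf   = sysconf(_SC_NPROCESSORS_CONF);  /* CPUs the kernel knows about */
            long online = sysconf(_SC_NPROCESSORS_ONLN);  /* CPUs currently usable */

            if (conf < 0 || online < 0) {
                perror("sysconf");
                return 1;
            }
            printf("configured: %ld, online: %ld\n", conf, online);
            return 0;
        }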

  • 100? (Score:3, Insightful)

    by nmg196 ( 184961 ) on Monday October 26, 2009 @05:10AM (#29870247)

    Wouldn't it have been better to make it a power of 2? Some work is more easily divided when you can just keep halving it. 64 or 128 would have been more logical, I would have thought. I'm not an SMP programmer though, so perhaps it doesn't make any difference.
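
    The halving intuition matters less than it might seem: a plain block partition splits m items over n workers for any n, power of two or not. A sketch (nothing here is specific to Tilera's hardware, and the numbers are made up):

        /* partition.c -- split m work items over n workers where n
           need not be a power of two. Worker k gets an almost-equal
           block; the first (m % n) workers take one extra item. */
        #include <stdio.h>

        static void block_range(long m, long n, long k, long *start, long *end)
        {
            long base = m / n, extra = m % n;
            *start = k * base + (k < extra ? k : extra);
            *end   = *start + base + (k < extra ? 1 : 0);
        }

        int main(void)
        {
            long start, end;
            for (long k = 0; k < 100; k++) {   /* 100 workers, as on the Gx100 */
                block_range(1000003, 100, k, &start, &end);
                if (k < 3)
                    printf("worker %ld: [%ld, %ld)\n", k, start, end);
            }
            return 0;
        }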

  • Re:obligatory (Score:3, Insightful)

    by fractoid ( 1076465 ) on Monday October 26, 2009 @05:10AM (#29870253) Homepage

    It IS a Beowulf cluster.

    Obligatory Princess Bride quote:
    Miracle Max: Go away or I'll call the brute squad!
    Fezzik: I'm ON the brute squad.
    Miracle Max: [opens door] You ARE the brute squad!

  • Re:Custom ISA? (Score:4, Insightful)

    by complete loony ( 663508 ) <Jeremy.Lakeman@g ... .com minus punct> on Monday October 26, 2009 @05:20AM (#29870297)

    1. LLVM backend
    2. Grand Central
    3. ???
    4. Done.

    Seriously though, this is exactly what Apple have been working towards recently in the compiler space. You write your application and explicitly break the algorithm up into little tasks that can be executed in parallel, using a syntax that is lightweight and expressive. Then your compiler toolchain and runtime JIT manage the threads and determine which processor is best equipped to run each task. It might run on the normal CPU, or it might run on the graphics card.
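
    For readers who haven't seen it, the style being described looks roughly like this. A minimal sketch in C, assuming Apple's blocks extension and libdispatch (on other systems, something like clang -fblocks ... -ldispatch):

        /* gcd_sketch.c -- sketch of the Grand Central Dispatch style.
           The runtime, not the programmer, decides how iterations map
           onto however many cores the machine has. */
        #include <dispatch/dispatch.h>
        #include <stdio.h>

        #define N 1000000

        int main(void)
        {
            static float data[N];
            dispatch_queue_t q =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

            /* Each iteration is an independent task the runtime may
               schedule on any available core. */
            dispatch_apply(N, q, ^(size_t i) {
                data[i] = (float)i * 0.5f;
            });

            printf("data[42] = %f\n", data[42]);
            return 0;
        }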

  • Re:This is great ! (Score:3, Insightful)

    by am 2k ( 217885 ) on Monday October 26, 2009 @05:30AM (#29870339) Homepage

    Actually, some algorithms (like fluid simulation and a very large neural net) are not that hard to parallelize to run on a million cores.

  • Re:This is great ! (Score:3, Insightful)

    by dkf ( 304284 ) <donal.k.fellows@manchester.ac.uk> on Monday October 26, 2009 @05:48AM (#29870431) Homepage

    Actually, some algorithms (like fluid simulation and a very large neural net) are not that hard to parallelize to run on a million cores.

    Building the memory backplane and communication system (assuming you're going for a cluster) to support a million CPUs is non-trivial. Without those, you'll go faster with fewer CPUs. That's why supercomputers are expensive: the cost isn't in the processors but in the rest of the infrastructure needed to support them.

  • It would be clever (Score:3, Insightful)

    by rbanffy ( 584143 ) on Monday October 26, 2009 @07:22AM (#29870763) Homepage Journal

    Since (a) developing a processor is insanely expensive and (b) they need it to run lots of software ASAP, it would be very clever if they spent a marginal part of the overall development costs on making sure every key Linux and *BSD kernel developer gets some hardware they can use to port the stuff over. Make it a nice desktop workstation with cool graphics and it will happen even faster.

    They are going up against Intel... The traditional approach (delivering a faster processor with better power consumption at a lower price) simply will not work here.

    I think Movidis taught us a lesson a couple years back. Users will not move away from x86 for anything less than a spectacular improvement. Even the Niagara SPARC servers are a hard sell these days...

  • Re:This is great ! (Score:5, Insightful)

    by TheRaven64 ( 641858 ) on Monday October 26, 2009 @08:11AM (#29870975) Journal
    And this is one of the reasons why Linux is such a pain to program for. If you actually want any of this information from a program, you need to parse /proc/cpuinfo. Unfortunately, every architecture formats this file differently, so porting from Linux/x86 to Linux/PowerPC or Linux/ARM requires you to rewrite the parser.

    Contrast this with *BSD, where the same information is available through sysctls: you just fire off the one you want (three lines of code), need no parser, and can use the same code on all supported architectures.

    For fun, try writing code that gets the current power status or the number and speed of the CPUs. I've done that, and the total code for supporting NetBSD, OpenBSD, FreeBSD and Solaris on all of their supported architectures was less than the code for supporting Linux/x86 alone (and that code doesn't work on Linux/PowerPC).
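
    A sketch of that contrast in C. The sysctl name hw.ncpu holds on FreeBSD, NetBSD and Mac OS X (OpenBSD traditionally uses numeric sysctl(3) MIBs instead), and the "processor" line count assumes the Linux/x86 layout of /proc/cpuinfo; both are stated assumptions rather than a portable recipe:

        /* ncpu.c -- counting CPUs: one sysctl on the BSDs versus
           grepping /proc/cpuinfo on Linux, whose layout varies by
           architecture. */
        #include <stdio.h>
        #include <string.h>

        #if defined(__linux__)
        static int ncpu(void)
        {
            FILE *f = fopen("/proc/cpuinfo", "r");
            char line[256];
            int n = 0;

            if (!f) return -1;
            while (fgets(line, sizeof line, f))
                if (strncmp(line, "processor", 9) == 0)  /* x86 layout assumed */
                    n++;
            fclose(f);
            return n;
        }
        #else  /* FreeBSD, NetBSD, Mac OS X */
        #include <sys/types.h>
        #include <sys/sysctl.h>

        static int ncpu(void)
        {
            int n;
            size_t len = sizeof n;
            if (sysctlbyname("hw.ncpu", &n, &len, NULL, 0) < 0) return -1;
            return n;
        }
        #endif

        int main(void)
        {
            printf("CPUs: %d\n", ncpu());
            return 0;
        }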
  • by cpghost ( 719344 ) on Monday October 26, 2009 @10:07AM (#29871949) Homepage

    My point being that 100 cores, while it sounds impressive, gives diminishing returns after the first few cores.

    Yes, indeed. The memory bus is usually the bottleneck here... unless you switch from SMP to a NUMA architecture, which seems necessary for anything with more than, say, 8 to 16 cores.
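
    On Linux the usual first step in that direction is libnuma: put each worker's data on its own node so its memory accesses stay off the shared bus. A hedged sketch, assuming a NUMA-capable kernel with libnuma installed (link with -lnuma):

        /* numa_sketch.c -- allocate a worker's buffer on a specific
           NUMA node so its accesses stay local. */
        #include <numa.h>
        #include <stdio.h>

        int main(void)
        {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this kernel\n");
                return 1;
            }
            printf("%d NUMA node(s)\n", numa_max_node() + 1);

            /* Buffer lives on node 0; a worker thread bound to node 0
               then never crosses the interconnect to reach it. */
            size_t sz = 1 << 20;
            void *buf = numa_alloc_onnode(sz, 0);
            if (!buf) {
                fprintf(stderr, "numa_alloc_onnode failed\n");
                return 1;
            }

            /* ... bind a worker to node 0 and use buf here ... */

            numa_free(buf, sz);
            return 0;
        }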
