Hardware

Tilera To Release 100-Core Processor

angry tapir writes "Tilera has announced new general-purpose CPUs, including a 100-core chip. The two-year-old startup's Tile-GX series of chips is targeted at servers and appliances that execute Web-related functions such as indexing, Web search and video search. The Gx100 100-core chip will draw close to 55 watts of power at maximum performance."

Comments Filter:
  • This is great ! (Score:5, Interesting)

    by ls671 ( 1122017 ) * on Monday October 26, 2009 @04:27AM (#29870063) Homepage

    I can't wait to see the output of:

    cat /proc/cpuinfo

    I guess we will need to use:

    cat /proc/cpuinfo | less

    When we reach 1 million cores, we will need to rearrange the output of cat /proc/cpuinfo to eliminate redundant information ;-))

    By the way, I just typed "make menuconfig" and it will let you enter a number up to 512 in the "Maximum number of CPUs" field, so the Linux kernel seems ready for up to 512 CPUs (or cores, they are handled the same way by Linux it seems) as far as I can tell by this simple test. Entering a number greater than 512 gives the "You have made an invalid entry" message ;-(

    Note: You need to turn on the "Support for big SMP systems with more than 8 CPUs" flag as well.
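
    For a quick programmatic check, here is a minimal C sketch (nothing Tilera-specific; it uses the common, non-strict-POSIX sysconf extensions) that asks the running system how many processors it has configured and how many are currently online:

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* _SC_NPROCESSORS_CONF: processors configured on this system    */
        /* _SC_NPROCESSORS_ONLN: processors currently online/schedulable */
        long configured = sysconf(_SC_NPROCESSORS_CONF);
        long online     = sysconf(_SC_NPROCESSORS_ONLN);

        printf("configured: %ld, online: %ld\n", configured, online);
        return 0;
    }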

     

  • by LaurensVH ( 1079801 ) <(lvh) (at) (laurensvh.be)> on Monday October 26, 2009 @04:36AM (#29870107)
    It appears from the article that it's a new, separate architecture to which the kernel hasn't been ported yet, so these are add-on processors that can help reduce the load on the actual CPU, at least for now. So, em, two things:

    1. How exactly does that work without kernel-level support? They claim to have ported individual apps (MySQL, memcached, Apache), which might suggest a generic kernel interface and userspace scheduling.

    2. How does this fix the fact that the apps they ported are mostly I/O-bound in a lot of cases, so 99% of the cores will still just be eating out of their noses?
  • by broken_chaos ( 1188549 ) on Monday October 26, 2009 @04:40AM (#29870125)

    How does this fix the fact that the apps they ported are mostly I/O-bound in a lot of cases, so 99% of the cores will still just be eating out of their noses?

    Loads and loads of RAM/cache, possibly?

  • FreeBSD and GCD (Score:3, Interesting)

    by MacTechnic ( 40042 ) on Monday October 26, 2009 @05:44AM (#29870407) Homepage

    Although I don't expect Apple to release an Apple Server edition with a Tilera multicore processor, I would be interested to see a version of FreeBSD running with Grand Central Dispatch on a Tilera multicore chip. It would give a good idea of how effective GCD would be at allocating cores for execution. Any machine with 100 cores must have a considerable amount of RAM, perhaps 8GB+, even with large caches.

    Apple has been very active in developing LLVM compilers, and has recently added the Clang front end in place of GCC. I don't think Apple has open-sourced all their work yet, but check llvm.org for the current details. The real trick is breaking any algorithm into blocks and using OpenCL to organize your code for execution. I mean, how different is a 100-core multi-CPU chip from a multicore GPU accelerator?
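
    For the curious, here is a minimal sketch of what farming a loop out through GCD looks like from plain C (assuming a libdispatch build such as macOS or the FreeBSD port; the array and its size are made up for illustration):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    /* One loop iteration; GCD may run many of these concurrently,
       spreading them over however many cores it decides to use. */
    static void square_one(void *ctx, size_t i) {
        double *v = ctx;
        v[i] = v[i] * v[i];
    }

    int main(void) {
        enum { N = 1000000 };
        static double data[N];
        for (size_t i = 0; i < N; i++) data[i] = (double)i;

        /* dispatch_apply_f runs N iterations in parallel and returns
           when they have all finished. */
        dispatch_apply_f(N,
                         dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                         data, square_one);

        printf("data[3] = %.0f\n", data[3]);
        return 0;
    }

    The point is that the programmer only expresses independent chunks of work; how many cores actually get used is the runtime's problem, which is exactly what you would want on a 100-core part.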

  • by Anonymous Coward on Monday October 26, 2009 @06:56AM (#29870661)

    OK, so big disclaimer: I work for Sun (not Oracle, yet!)

    The Sun Niagara T1 chip came out over 3 years ago, and it did 32 threads on 8 cores.
    And drew something around 50W (200W for a fully-loaded server). And under $4k.

    The T2 systems came out last year, do 64 threads/CPU for a similar power budget. And even less $/thread.

    The T3 systems likely will be out next year (I don't know specifically when, I'm not In The Know), and the threads/chip should double again, with little power increase.

    Of course, per-thread performance isn't anywhere near that of a modern "standard" CPU. Still, it's now "good enough" for most stuff - the T2 systems have per-thread performance roughly equal to the old Pentium III chips. I would be flabbergasted if this GX chip had per-core performance anywhere near that.

    I'm not sure how Intel's Larrabee is going to shape up (it's still nowhere near release), but the T-series chips from Sun are cheap, open, and available now. And they run Solaris AND Linux. So unless this new GX chip is radically more efficient/higher-performance/less costly, I don't see this company making any impact.

    -Erik

  • by kannibul ( 534777 ) on Monday October 26, 2009 @09:42AM (#29871669)
    For some reason, I read this article and immediately thought about a 15-bladed shaving razor... My point being that while 100 cores sounds impressive, you get diminishing returns after a few cores. Even if software were written for multi-core use (and not enough of it is, IMO), you still can't possibly, effectively, use 100 cores... not before this processor is already extinct due to technological progress. Even my quad-core Intel CPU hardly uses all 4 cores... and most commonly hits CPU1 for processes.
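
    That intuition is basically Amdahl's law: if only a fraction p of the work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A quick back-of-the-envelope in C (the 90% parallel fraction is just an assumed, illustrative figure):

    #include <stdio.h>

    /* Amdahl's law: speedup on n cores when a fraction p of the work
       is parallelizable and the rest stays serial. */
    static double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        const double p = 0.90;               /* assumed parallel fraction */
        const int cores[] = {1, 2, 4, 8, 100};
        for (int i = 0; i < 5; i++)
            printf("%3d cores -> %5.2fx\n", cores[i], amdahl(p, cores[i]));
        return 0;                            /* 100 cores -> only ~9.2x */
    }
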
  • Re:Yep (Score:3, Interesting)

    by afidel ( 530433 ) on Monday October 26, 2009 @10:04AM (#29871919)
    10Gb Ethernet is fairly low latency and obviously has plenty of bandwidth, and using remote DMA you can get pretty damn good results. Obviously if latency is your #1 performance blocker then it's not going to produce the fastest results, but you can still get good results out of a fairly inexpensive cluster using 10Gb fat trees for most workloads. Basically, commodity computing technology has shrunk the gap between what can be done on a moderately sized commodity cluster and what can be done on a purpose-built supercomputer - the result being what has happened to Cray and SGI.
  • Re:This is great ! (Score:4, Interesting)

    by tixxit ( 1107127 ) on Monday October 26, 2009 @10:30AM (#29872231)
    There is actually an entire model of computation dedicated to parallel computation: the PRAM model [wikipedia.org]. Lots of nifty algorithms have already been designed for the PRAM model (O(log n) sorting, for instance). What's even cooler is that some of these algorithms have given insights that were then used to provide speedups in the RAM model (e.g. Megiddo's "Applying Parallel Computation Algorithms in the Design of Serial Algorithms" [acm.org]).
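
    As a down-to-earth illustration (OpenMP here rather than a true PRAM, and nothing Tilera-specific), a parallel reduction is the classic O(log n)-depth PRAM building block; OpenMP's reduction clause is the everyday version of it (build with -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        enum { N = 1000000 };
        static double v[N];
        for (int i = 0; i < N; i++) v[i] = 1.0;

        /* Each thread sums a chunk, then the partial sums are combined -
           conceptually the same tree reduction a PRAM does in O(log n) steps. */
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += v[i];

        printf("sum = %.0f (max threads: %d)\n", sum, omp_get_max_threads());
        return 0;
    }
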
  • asymmetric (Score:3, Interesting)

    by TheSHAD0W ( 258774 ) on Monday October 26, 2009 @10:49AM (#29872491) Homepage

    It's been reported that these cores will be relatively underpowered, though both the total processing power and the cost per watt will be quite impressive. This makes the chip appropriate for putting in a server, but not so much a desktop machine, where CPU-intensive single threads may bog things down.

    So what about one of these in combination with a 2-, 3-, or 4-core AMD/Intel chip? The serious threads could run on the faster chip, while all the background stuff gets spread among the slower cores. Does Windows have the ability to prioritize like that? Does Linux?
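
    Both do, at least at the level of CPU affinity: Linux has sched_setaffinity()/taskset and Windows has SetThreadAffinityMask(). A minimal Linux-only sketch that pins the current process to one (purely illustrative) core - the sort of thing a scheduler for a fast-core/slow-core mix would do per thread:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                    /* allow only logical CPU 0 */

        /* pid 0 means "the calling process/thread" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        /* ... run the latency-sensitive, single-threaded work here ... */
        printf("pinned to CPU 0\n");
        return 0;
    }

    Affinity alone doesn't make the OS automatically route "serious" threads to the fast cores, but it is the mechanism such a policy would be built on.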

"And remember: Evil will always prevail, because Good is dumb." -- Spaceballs

Working...