Hardware Technology

Grid Processing (130 comments)

c1ay writes "We've all heard the new buzzword 'grid computing' quite a bit in the news recently. Now the EE Times reports that a team of computer architects at the University of Texas plans to develop prototypes of an adaptive, grid-like processor that exploits instruction-level parallelism. The prototypes will include four TRIPS (Tera-op Reliable Intelligently Adaptive Processing System) processors, each containing 16 execution units laid out in a 4 x 4 grid. By the end of the decade, when 32-nanometer process technology is available, the goal is to have tens of processing units on a single die, delivering more than 1 trillion operations per second. In an age where clusters are becoming more prevalent for parallel computing, I've often wondered where the parallel processor was. How about you?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by rhetland ( 259464 ) on Monday September 15, 2003 @09:08AM (#6962991)

    I use parallel computing on a cluster, in which I divide up my computational domain into a number of chunks, and each chunk is farmed out to a processor. Communication between the processes is required at the chunk boundaries.

    For this case, I see how my code is partitioned, and I also understand (on a general level, at least) what the limitations on speed are: information passed between the chunks.

    Now, how will this processor do its 'instruction level' parallelization? Will it be great at do loops (one 'do' per processor)? Will it be like a mini vector processor? What will break down the efficiency of the parallelization?

    I have found that efficiency in parallelization is very application dependent after about 8-32 processors. Will this break that barrier?

    Most importantly, will it kick butt for MY applications?
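    The chunked-domain pattern described above can be sketched in plain Python. This is a toy illustration, not the poster's actual code: a 1D array is split into chunks, each "processor" applies a 3-point averaging stencil to its chunk, and neighboring chunks exchange one boundary cell per step (the halo exchange that limits scaling).

```python
def split(domain, nchunks):
    """Divide the domain into equal chunks (assumes even division)."""
    size = len(domain) // nchunks
    return [domain[i * size:(i + 1) * size] for i in range(nchunks)]

def step(chunks):
    """One stencil step: average each cell with its neighbours, using
    the adjacent chunk's edge cell at chunk boundaries (the 'halo')."""
    new_chunks = []
    for i, chunk in enumerate(chunks):
        left_halo = chunks[i - 1][-1] if i > 0 else chunk[0]
        right_halo = chunks[i + 1][0] if i < len(chunks) - 1 else chunk[-1]
        padded = [left_halo] + chunk + [right_halo]
        new_chunks.append([(padded[j - 1] + padded[j] + padded[j + 1]) / 3.0
                           for j in range(1, len(padded) - 1)])
    return new_chunks

domain = [0.0] * 4 + [9.0] + [0.0] * 3   # a spike in the middle
chunks = step(split(domain, 2))
flat = [x for c in chunks for x in c]    # spike diffuses across the chunk boundary
```

    On a real cluster the halo values would arrive over the network (e.g. via MPI) rather than by indexing a neighboring list, which is exactly where the communication cost appears.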
  • by binaryDigit ( 557647 ) on Monday September 15, 2003 @09:12AM (#6963011)
    Forgive me if I'm off base here, but perhaps a proccie nerd can explain the differences between this design and, say, VLIW. They seem closely related: breaking the app into parallelizable chunks and sending them to n execution units. The article doesn't mention if the TRIPS processing nodes can 'talk' to each other. If they can't, then this seems very similar in concept to VLIW (if not different in physical and logical layout).
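    The VLIW-style scheduling being compared against can be sketched as a toy bundler. This is an illustrative simplification (not how any real VLIW compiler is implemented): instructions are packed in program order into bundles of up to WIDTH, starting a new bundle whenever an instruction depends on a result produced earlier in the current bundle.

```python
WIDTH = 4  # execution units per bundle (the article's 4 x 4 grid has 16)

def bundle(instrs):
    """instrs: list of (dest, [sources]) tuples in program order.
    An instruction may not share a bundle with any earlier instruction
    whose destination it reads (RAW) or overwrites (WAW)."""
    bundles = []
    current, written = [], set()
    for dest, srcs in instrs:
        conflict = dest in written or any(s in written for s in srcs)
        if conflict or len(current) == WIDTH:
            bundles.append(current)
            current, written = [], set()
        current.append((dest, srcs))
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

program = [
    ("a", ["x"]),        # a = f(x)
    ("b", ["y"]),        # b = f(y)   independent of a
    ("c", ["a", "b"]),   # c = a + b  depends on both
    ("d", ["c"]),        # d = f(c)
]
bundles = bundle(program)  # [[a, b], [c], [d]] -- a and b issue together
```

    The key difference hinted at in the article is who does this grouping: in VLIW the compiler fixes the bundles statically, whereas an adaptive grid could resolve dependencies as operands arrive.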
  • by ChrisRijk ( 1818 ) on Monday September 15, 2003 @09:32AM (#6963135)
    Even with one CPU core, if your system is main memory bandwidth limited (or mostly so), then extra cores won't help (much). So this kind of design looks good only for non-bandwidth-limited tasks, which is a much smaller market.

    They don't seem to be considering business servers here, but those are more main-memory-latency limited than bandwidth limited, so multiple cores can help a lot. But you need more than simply lots of cores to have a good design. A critical thing to have is major software support, which means using an existing ISA, not a new one.

    So I'd expect this to be quite an obscure product in reality.
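    The bandwidth argument above can be made concrete with a roofline-style back-of-envelope model. The figures below are illustrative assumptions (2 GFLOPS per core, 8 GB/s of memory bandwidth), not measurements of any real chip: attainable performance is the lesser of the compute peak, which scales with cores, and bandwidth times arithmetic intensity, which does not.

```python
def attainable_gflops(cores, peak_per_core, bandwidth_gbs, intensity):
    """Roofline-style model: min(compute roof, memory roof).
    intensity is arithmetic intensity in flops per byte of memory traffic."""
    compute_roof = cores * peak_per_core
    memory_roof = bandwidth_gbs * intensity
    return min(compute_roof, memory_roof)

# streaming task: 0.25 flops/byte -- extra cores buy nothing
low = [attainable_gflops(n, 2.0, 8.0, 0.25) for n in (1, 2, 4, 8)]

# cache-friendly task: 16 flops/byte -- scales with core count
high = [attainable_gflops(n, 2.0, 8.0, 16.0) for n in (1, 2, 4, 8)]
```

    Under these assumed numbers the streaming task is pinned at 2 GFLOPS no matter how many cores are added, while the high-intensity task scales linearly, which is exactly the "much smaller market" being described.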
  • by rockmuelle ( 575982 ) on Monday September 15, 2003 @09:39AM (#6963205)
    Scientific and financial computing, especially modelling and simulation, are where parallel computers can make a difference.

    Many of the approaches to these problems take the form of a grid of elements that have local and possibly non-local interactions with each other. Each processor gets a subset of the points to work with and has to communicate with the neighboring processor's memory space to get information about neighboring points.

    In a cluster, handling the points at the edges (or any non-local effects) requires a network and possibly disk request. Compared to local memory, this is incredibly slow and can temporarily starve the processor.

    Big iron parallel systems address this by giving more processors access to the same memory and other shared resources, avoiding the costly network requests.

    Of course, the current supercomputers (ASCI *, etc.) are all clusters, just with incredibly fast network connections.

    -Chris
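    The edge-point penalty described above can be put in rough numbers. The latencies here are assumed round figures purely for illustration (100 ns for a local memory access, 10 us for a network round trip), not benchmarks: for an N x N chunk, only the perimeter points pay the network cost, so bigger chunks amortize it better.

```python
def per_step_cost_us(chunk_side, local_ns=100, network_ns=10_000):
    """Time to touch every point of an NxN chunk once, charging the
    network latency only for the 4*(N-1) perimeter points."""
    total = chunk_side * chunk_side
    edge = 4 * (chunk_side - 1) if chunk_side > 1 else 1
    interior = total - edge
    return (interior * local_ns + edge * network_ns) / 1000.0

small = per_step_cost_us(10)    # tiny chunk: edge traffic dominates
large = per_step_cost_us(1000)  # big chunk: interior work dominates
```

    With these assumed latencies, over 90% of the small chunk's time goes to network requests, which is why shared-memory big iron (or a very fast interconnect) changes the picture.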
  • by bluethundr ( 562578 ) * on Monday September 15, 2003 @10:01AM (#6963399) Homepage Journal
    In an age where clusters are becoming more prevalent for parallel computing I've often wondered where the parallel processor was. How about you?"

    Danny Hillis, the guy who founded Thinking Machines, designed a machine called The Connection Machine [base.com] (this story [svisions.com] has a cooler, more sci-fi lookin' pic of the old beastie [svisions.com]); the central design philosophy was to achieve MASSIVE computing power through parallelism. It had 65,536 procs, each of which lived on a wafer with DRAM thereon and a high-bandwidth connection to (if I remember correctly) up to 4 other of the procs. Young sir Danny wrote a book on his exploits, [barnesandnoble.com] well worth checking out (seemingly, it's been calling to me from my bookshelf for about a year now).

    And as someone pointed out, it seems we've seen this topic before. [slashdot.org] I'd have modded him up, [slashdot.org] (hint, hint) but I really like mentioning the connection machine where appropriate.
  • Read the Article (Score:3, Insightful)

    by EnglishTim ( 9662 ) on Monday September 15, 2003 @12:04PM (#6964687)
    Read the article - this isn't a case where you've got a whole bunch of traditional processors and you try to divide the work between them. They're talking about the CPU itself being split into several smaller general units, so that the instructions get executed across several of these units. The instructions are grouped together and then sent to the CPU in blocks. All the work for that block is then split between the units, taking into account any interdependencies. I suppose the closest thing to it would be to have microcode being executed in parallel.
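    The block execution described above can be sketched as a toy dataflow interpreter. This is a deliberate simplification of what a TRIPS-like grid would do in hardware, not a model of the actual chip: each instruction in a block fires as soon as its operands are available, rather than in program order, and independent instructions fire in the same "wave".

```python
def run_block(block, inputs):
    """block: dict name -> (op, [operand names]); inputs: dict of initial
    values. Fires every instruction whose operands are available, one
    wave at a time, and records which instructions fired together."""
    values = dict(inputs)
    waves = []
    pending = dict(block)
    while pending:
        ready = [n for n, (_, ops) in pending.items()
                 if all(o in values for o in ops)]
        if not ready:
            raise ValueError("deadlock: unresolved dependencies")
        for name in ready:          # everything ready issues together
            op, operands = pending.pop(name)
            values[name] = op(*(values[o] for o in operands))
        waves.append(sorted(ready))
    return values, waves

block = {
    "t1": (lambda a, b: a + b, ["x", "y"]),   # independent of t2
    "t2": (lambda a, b: a * b, ["x", "y"]),   # independent of t1
    "t3": (lambda a, b: a - b, ["t1", "t2"]), # waits for both
}
values, waves = run_block(block, {"x": 3, "y": 4})
```

    Here t1 and t2 fire together in the first wave and t3 in the second, which is the interdependency-aware splitting the comment describes.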
