Grid Processing
c1ay writes "We've all heard the new buzzword "grid computing" quite a bit in the news recently. Now the EE Times reports that a team of computer architects at the University of Texas plans to develop prototypes of an adaptive, gridlike processor that exploits instruction-level parallelism. The prototypes will include four TRIPS (Tera-op Reliable Intelligently adaptive Processing System) processors, each containing 16 execution units laid out in a 4 x 4 grid. By the end of the decade, when 32-nanometer process technology is available, the goal is to have tens of processing units on a single die, delivering more than 1 trillion operations per second. In an age where clusters are becoming more prevalent for parallel computing, I've often wondered where the parallel processor was. How about you?"
Sun may already be ahead of the game here(!) (Score:4, Informative)
Read about plans for Sun's "Niagara" core [theregister.co.uk]
I understand they hope to create blade systems using high densities of these multiscalar cores for incredible throughput.
There's your parallel/grid computing.
Grid computing? (Score:5, Informative)
Grid computing is exemplified by projects like MyGrid [man.ac.uk].
Grid confusion (Score:5, Informative)
Gridlike Computing Vs Grid Computing (Score:3, Informative)
The article doesn't actually have anything to do with "grid computing", but the processor's design is like a grid. The term "grid computing" [globus.org] often refers to large-scale resource sharing (processing/storage).
read the comments from the horse's mouth (Score:5, Informative)
-- emery berger, dept. of cs, univ. of massachusetts
AKA Reconfigurable Computing (Score:3, Informative)
Re:BS & hype (Score:3, Informative)
At 32 nanometers, Intel could put tens of HT Pentium cores on a single chip, achieving the same result.
Yes, but with any more than 16 logical cores, your x86 arch won't recognize them. Why? 4-bit CPU identifiers (each logical core under HT identifies itself as a normal processor).
For computational problems that can be broken down into parallel computations, the answer is yes. For all the other types of problems, the answer is no. Although I have to admit that most algorithmic bottlenecks are in iterative tasks that are highly parallelizable.
Very true, but no more true for TRIPS than for any other parallel system. Additionally, just about every computer now does a lot of things in parallel. Think of any multitasking OS. So, if worst comes to worst, you can run x number of apps as normal serial executions (though TRIPS wouldn't run any currently existing commercial software -- new platform and all, and a test bed too, not something ready for production by any means).
Unfortunately, it will mean new compilers and maybe programming languages that have primitives for expressing parallelism.
I completely agree.
Re:Just out of curiosity.... (Score:5, Informative)
E.g., who cares how many instructions you can process in parallel if module A requires data from module B? In these cases parallelisation is limited to making each module run faster (if it has no sub-dependencies, of course); the program as a whole doesn't benefit from the parallelisation.
Good examples of parallel processing are the ones we know - distributed apps like SETI@home, graphics rendering, etc.
Bad candidates are everyday data-processing systems - they typically work on a single lump of data at a time, in sequence.
A good source on parallel programming is http://wotug.ukc.ac.uk/parallel/ or, of course, Google.
Re:What about Transputers? (Score:3, Informative)
We used transputers on quite a large number of projects right here at the University of Texas.
the NIH principle
Actually, the problem was that they were slow and complicated. They went so long between family upgrades that eventually we could replace a large array of transputers with a few regular CPUs. Not to mention that we can also get a handy little thing like an OS on general purpose CPUs.
programming languages designed for parallelism
Did I mention complicated? Occam was part of the problem. The scientific world wants to program in C or Fortran, or some extension of them, or some library called by them. That's why MPI is so popular.
not all problems can be done faster by doing more of it at once
I'm not sure I agree. Having more capability at each compute node means less need for partitioning. (The part you say is hard.)
Obviously there's a lot of work to be done in parallel processing. You can hardly blame Inmos's problems on geography (or blame America for Inmos's problems). They looked very promising for a while, but just didn't keep up.
project home page (Score:2, Informative)
They have some papers available there...
Re:The Parrallel Processor (Score:4, Informative)
OK, HT double-clocks the cache, so you get two caches for the price of one! The G5, Cell Linky [zive.net], and the Opteron are all multicore chips; the difference (apart from the arch!) is the way VLIWs are fed to each of these. They are NOT parallel processors. Parallelism can be defined as the maintenance of cache coherence, which is either inclusive (Cray) or exclusive (RS/6000), and it requires a lot of bandwidth (local x-bar versus network). Parallel computers, by contrast, are not cache coherent and have a remote x-bar architecture. It all adds up to the same hypercube.
Fortran 95 oddly enough is multi-processor aware. (Score:5, Informative)
For parallel processing, Fortran boasts many language-level features that give ANY code implicit parallelism, implicit multi-threading, and implicit distribution of memory WITHOUT the programmer consciously invoking multiple threads or having to use special libraries or overloaded commands.
Examples of this are the FORALL and WHERE statements, which replace the usual "for" and "if" of C.
FORALL (I = 1:5)
   WHERE (A(I,:) > 0)
      A(I,:) = LOG(A(I,:))
   ENDWHERE
   call some_slow_disk_write(A(I,:))
END FORALL
The FORALL runs the loop with the variable I over the range 1 to 5, but in any order, not just 1,2,3,4,5, and of course it can be done in parallel if the compiler or OS, not the programmer, sees the opportunity on the run-time platform. The statement is a clue from the programmer to the compiler not to worry about dependencies. Moreover, the program can intelligently multi-thread so the slow disk write does not stall the loop on each iteration.
The WHERE is like an "if" but tells the compiler to map the operation over the array in parallel. What this means is that you can place conditional tests inside of loops, and the compiler knows how to factor the "if" out of the loop in a parallel and non-dependent manner.
Moreover, since WHERE and FORALL tell the compiler that there are no memory-dependent interactions it must worry about, it can simply distribute pieces of the A array to different processors without having to maintain coherency between the pieces used by different processors, thus eliminating shared-memory bottlenecks.
Another parallelism feature is that the header declarations not only declare the "type" of each variable...
Another rather nice virtue of Fortran is that it uses references rather than pointers (like Java). And amazingly, the syntax makes typos that still compile almost impossible; that is, a missing +, =, comma, or semicolon, the wrong number of array indices, etc., will not compile (in contrast to C's ==, ++, =+, [][] and so on).
One sad reason the world does not know about these wonderful features, or keeps repeating myths about features the Fortran language supposedly lacks, is GNU. Yes, I know it's a crime to criticize GNU on Slashdot, but bear with me here, because in this case they deserve some criticism for releasing a non-DEC-compatible compiler.
For the record, ancient Fortran 77 as well as modern Fortran 95 DOES do dynamic allocation, support complex data structures (classes), and have pointers (references) in every professional Fortran compiler. Sadly, GNU Fortran 77, the free Fortran, lacks these language features, and there is no GNU Fortran 95 yet. This lack prevents a lot of people from writing code in this modern language. If GNU g77 did not exist, the professional compilers would be much more affordable. So I hope some reader who knows about compiler design is motivated to give the languishing GNU Fortran 95 project the push it needs to finish.
In the age of ubiquitous dual processing, Fortran could well become a valuable scientific language due to its ease of programming and resistance to syntax errors.
Re:Fortran 95 oddly enough is multi-processor aware (Score:2, Informative)
I wouldn't put much blame on GNU. Fortran 77 was a fairly unpleasant language, even before GNU existed. Compiler extensions sometimes helped but weren't too great for portability.
Not that I don't want to see a GNU Fortran 95, but if you can tolerate free-as-in-beer software, Intel makes their Fortran compiler available for free for noncommercial use on Linux: IFC [intel.com]
There is also the F programming language which is a (mostly) tastefully selected subset of Fortran 95: F [fortran.com]. Mostly it just throws out redundant features and stuff inherited from Fortran 77. It's a little picky in a teaching-language sort of way and takes some getting used to, but I have ported code to F without pulling my hair out. And the code did end up a bit clearer for the changes.