Hardware Technology

Grid Processing

c1ay writes "We've all heard the new buzzword 'grid computing' quite a bit in the news recently. Now the EE Times reports that a team of computer architects at the University of Texas plans to develop prototypes of an adaptive, grid-like processor that exploits instruction-level parallelism. The prototypes will include four TRIPS (Tera-op, Reliable, Intelligently adaptive Processing System) processors, each containing 16 execution units laid out in a 4 x 4 grid. By the end of the decade, when 32-nanometer process technology is available, the goal is to have tens of processing units on a single die, delivering more than 1 trillion operations per second. In an age where clusters are becoming more prevalent for parallel computing, I've often wondered where the parallel processor was. How about you?"
  • by Stone316 ( 629009 ) on Monday September 15, 2003 @08:58AM (#6962939) Journal
    I'm not sure about other platforms, but in Oracle's case they say you don't require any code changes. Your application should run fine right out of the box.
  • by pr0ntab ( 632466 ) <> on Monday September 15, 2003 @09:04AM (#6962967) Journal
    Normally I don't pimp Sun, but here's something that makes me think they still have a finger on the pulse of things:
    Read about plans for Sun's "Niagara" core []

    I understand they hope to create blade systems using high densities of these multiscalar cores for incredible throughput.

    There's your parallel/grid computing. ;-)

  • Grid computing? (Score:5, Informative)

    by dan dan the dna man ( 461768 ) on Monday September 15, 2003 @09:04AM (#6962968) Homepage Journal
    I still think this is not what is commonly understood by the term "Grid Computing". Maybe it's the environment I work in but to me Grid Computing means something else []

    And is exemplified by projects like MyGrid [].
  • Grid confusion (Score:5, Informative)

    by Handyman ( 97520 ) * on Monday September 15, 2003 @09:06AM (#6962978) Homepage Journal
    It's funny how people always seem to find a way to confuse what is meant by a "grid". The posting talks about a "4x4 grid" without clarifying the term, which is confusing because grid computing has nothing to do with processing units being lined up in a grid. The "grid" in "grid computing" comes from an analogy with the power grid, not from any form of "grid layout". The analogy is based on the fact that with grid computing, you simply plug your "computing power client appliance" (not necessarily a PC, could be the fridge) into the "computing power outlet" in the wall (a network port, usually), and you "consume computing power", like you would do with electricity. Computational grids don't even necessarily have to support parallel programs; it is easy to imagine grids whose maximum allocated unit is a single processor. What makes such grids grids is that you can allocate the power on demand, when you need it, instead of having to have your own "computing power generator" (read: megapower CPU) at home.
  • by jedigeek ( 102443 ) on Monday September 15, 2003 @09:08AM (#6962993) Journal

    We've all heard the new buzzword, "grid computing" quite a bit in the news recently.

    The article doesn't actually have anything to do with "grid computing", but the processor's design is like a grid. The term "grid computing" [] often refers to large-scale resource sharing (processing/storage).
  • by Ristretto ( 79399 ) <<> <ta> <yreme>> on Monday September 15, 2003 @09:16AM (#6963041) Homepage
    This story already appeared [], but was posted by someone who was not confused by the use of the term "grid"... Doug Burger, one of the two key profs on this project (and no relation!), answered lots of questions, which you can see here [].

    -- emery berger, dept. of cs, univ. of massachusetts
  • by yerdaddie ( 313155 ) on Monday September 15, 2003 @09:18AM (#6963054) Homepage
    The ability to adapt the architecture for the workload, as discussed in this article, is something common to many different reconfigurable computing architectures [] like:
    Quite a number of researchers are looking at the performance and density [] advantages of reconfigurable architectures in addition to the work mentioned in this article. What's really intriguing is considering how operating systems could support reconfiguration []. There doesn't seem to be much work on the subject.
  • Re:BS & hype (Score:3, Informative)

    by Valar ( 167606 ) on Monday September 15, 2003 @09:35AM (#6963163)
    It's not as much hype as you would think (in the interest of full disclosure, I am a UT EE student and about half of my posts on /. now seem to be talking about something the university has done...). Yes, grid computing is a bad term for it, because it's already taken. I'm not sure whose fault it is that it got labelled that, but I doubt it was one of the guys actually working on this. They all seem like competent lads. Now for what I actually have to say:

    At 32 nanometers, Intel could put tens of HT pentium cores on a single chip, achieving the same result.
    Yes, but any more than 16 logical cores and your x86 arch won't recognize them. Why? 4-bit CPU identifiers (each logical core under HT identifies itself as a normal processor).

    For computational problems that can be broken down into parallel computations, the answer is yes. For all the other types of problems, the answer is no. Although I have to admit that most algorithmic bottlenecks are in iterative tasks that are highly parallelizable.
    Very true, but no more true for TRIPS than for any other parallel system. Additionally, just about every computer now does a lot of things in parallel. Think of any multitasking OS. So, worst comes to worst, you can run x number of apps as normal serial executions (though TRIPS wouldn't run any currently existing commercial software -- new platform and all, and a test too, not something ready for production by any means).

    Unfortunately, it will mean new compilers and maybe programming languages that have primitives for expressing parallelism.
    I completely agree.
  • by gbjbaanb ( 229885 ) on Monday September 15, 2003 @10:00AM (#6963388)
    Most parallel systems only work for a certain type of problem: one where processing can be split into many small chunks, each one independent of the others.

    E.g. who cares how many instructions you can process in parallel if module A requires data from module B? In these cases parallelisation is limited to making each module run faster (if it doesn't have sub-dependencies, of course); the entire program doesn't benefit from the parallelisation.

    Good examples of parallel processing are the ones we know - distributed apps like SETI@home, graphics rendering, etc.

    Bad systems are everyday data processing systems - they typically work on a single lump of data at a time, in sequence.

    A good source of parallel programming information is [] or, of course, Google.

  • by stevesliva ( 648202 ) on Monday September 15, 2003 @10:14AM (#6963526) Journal
    A more detailed article. [] IBM has been doing dual-core processors in its flagship POWER line for a few years now, although it appears higher numbers of cores per die will only be appearing in more experimental IBM projects. Except perhaps the PS3 Cell processor [], a collaboration of IBM and Sony. Since the Cell group is based in Austin, there's likely to be some collaboration between TRIPS and Cell. As a matter of fact, they sound very similar.
  • by GregAllen ( 178208 ) on Monday September 15, 2003 @10:15AM (#6963536) Homepage
    no one in America noticed them
    We used transputers on quite a large number of projects right here at the University of Texas.

    the NIH principle
    Actually, the problem was that they were slow and complicated. They went so long between family upgrades that eventually we could replace a large array of transputers with a few regular CPUs. Not to mention that we can also get a handy little thing like an OS on general purpose CPUs.

    programming languages designed for parallelism
    Did I mention complicated? Occam was part of the problem. The scientific world wants to program in C or Fortran, or some extension of them, or some library called by them. That's why MPI is so popular.

    not all problems can be done faster by doing more of it at once
    I'm not sure I agree. Having more capability at each compute node means less need for partitioning. (The part you say is hard.)

    Obviously there's a lot of work to be done in parallel processing. You can hardly blame Inmos's problems on geography (or America for Inmos's problems). They looked very promising for a while, but just didn't keep up.
  • project home page (Score:2, Informative)

    by the quick brown fox ( 681969 ) on Monday September 15, 2003 @10:43AM (#6963799)
    project home page []

    They have some papers available there...

  • by Adm1n ( 699849 ) on Monday September 15, 2003 @11:09AM (#6964084)
    No no no.
    Ok, HT double-clocks the cache, so you have two caches for the price of one! The G5 is a multicore chip, and so are Cell Linky [] and the Opteron; the difference (apart from the arch!) is the way VLIWs are fed to each of these. They are NOT parallel processors. Parallelism can be defined as the maintenance of cache coherence, which is either inclusive (Cray) or exclusive (RS/6000) and requires a lot of bandwidth (local x-bar versus network). Parallel computers, on the other hand, are not cache coherent and have a remote x-bar architecture; it all adds up to the same hypercube.
  • Re:Die Yields (Score:2, Informative)

    by Adm1n ( 699849 ) on Monday September 15, 2003 @11:16AM (#6964162)
    Die verification will be modified to accommodate core-level verification prior to multiple cores being used. Since you are layering dies on one another, they will be verified individually, then as a whole; if they do not add up as individuals, then off to the scrap heap. But that all depends on the number of cores and the process. Keep in mind that current design software limits are around 20K layers of interconnect, so if a core is only 20 layers of interconnect (not uncommon), it's only 100 layers if it's scrap, and since it's vapor deposition the losses are negligible (comparable to white noise, or pennies on the hundred). Fabs spend more finding problems (and fixing them) than they do on materials. Yields are much more prone to design flaws and external-condition errors than to failure of a single element (remember the Pentium floating-point error due to the capacitors not being sprayed at the right density?).
  • by Anonymous Coward on Monday September 15, 2003 @11:33AM (#6964337)
    The G5 isn't a multiple-core CPU. However, the IBM POWER4, which it is derived from, is a dual-core CPU.
  • by goombah99 ( 560566 ) on Monday September 15, 2003 @11:35AM (#6964351)
    Fortran is NOT for everyday programming of word processors and such. However, the modern Fortran language probably ought to be the choice for most scientific programming; it's just that people think of it as an "old" (as in decrepit) language and don't learn it.

    For parallel processing, Fortran boasts many language-level features that give ANY code implicit parallelism, implicit multi-threading, and implicit distribution of memory WITHOUT the programmer cognizantly invoking multiple threads or having to use special libraries or overloaded commands.
    An example of this is the FORALL and WHERE statements that replace the usual "for" and "if" in C.

    FORALL (I = 1:5)
      WHERE (A(I,:) /= 0.0)
        A(I,:) = log(A(I,:))
      END WHERE
      call some_slow_disk_write(A(I,:))
    END FORALL

    The FORALL runs the loop with the variable "I" over the range 1 to 5, but in any order, not just 1,2,3,4,5, and of course it can be done in parallel if the compiler or OS, not the programmer, sees the opportunity on the run-time platform. The statement is a clue from the programmer to the compiler not to worry about dependencies. Moreover the program can intelligently multi-thread so the slow disk-write operation does not stop the loop on each iteration.

    The WHERE is like an "if" but tells the compiler to map the operation over the array in parallel. What this means is that you can place conditional tests inside loops and the compiler knows how to factor the if out of the loop in a parallel and non-dependent manner.

    Moreover, since the WHERE and FORALL tell the compiler that there are no memory-dependent interactions it must worry about, it can simply distribute just pieces of the A array to different processors, without having to maintain concurrency between the array copies used by different processors, thus eliminating shared-memory bottlenecks.

    Another parallelism feature is that header declarations not only declare the "type" of a variable, as C does, but also whether the routine will change that variable. This lets the compiler know that it can multi-thread and not have to worry about locking an array against changes. In the example, the disk-write subroutine would declare the argument (A) to be immutable. Again the multi-threading is hidden from the user, with no need for laborious "synchronize" mutex statements. It also allows for the concept of conditionally-mutable data.

    Other rather nice virtues of Fortran are that it uses references rather than pointers (like Java), and that, amazingly, the syntax makes typos that compile almost impossible. That is, a missing +, =, comma, or semicolon, the wrong number of array indices, etc. will not compile (in contrast to ==, ++, =+ and [][] etc. in C).

    One sad reason the world does not know about these wonderful features, or repeats the myths about the Fortran language missing features, is GNU. Yes, I know it's a crime to criticize GNU on Slashdot, but bear with me here, because in this case they deserve some criticism for releasing a non-DEC-compatible language.

    For the record, ancient Fortran 77 as well as modern Fortran 95 DOES do dynamic allocation, support complex data structures (classes), and have pointers (references) in every professional Fortran compiler. Sadly GNU Fortran 77, the free Fortran, lacks these language features, and there is no GNU Fortran 95 yet. This lack prevents a lot of people from writing code in this modern language. If GNU g77 did not exist, the professional compilers would be much more affordable. So I hope some reader who knows about compiler design is motivated to give the languishing GNU Fortran 95 project the push it needs to finish.

    In the age of ubiquitous dual processing, Fortran could well become a valuable scientific language due to its ease of programming and resistance to syntax errors.

  • There's a good book explaining a lot of this stuff in detail available from O'Reilly []. I can vouch for it having some neat stuff, and it covers how to write Fortran in such a way as to take advantage of the parallelism features.
  • by Pig Bodine ( 195211 ) on Monday September 15, 2003 @12:57PM (#6965256)
    Sadly GNU Fortran 77, the free Fortran, lacks these language features, and there is no GNU Fortran 95 yet. This lack prevents a lot of people from writing code in this modern language.

    I wouldn't put much blame on GNU. Fortran 77 was a fairly unpleasant language, even before GNU existed. Compiler extensions sometimes helped but weren't too great for portability.

    Not that I don't want to see a GNU Fortran 95, but if you can tolerate free as in beer software, Intel makes their fortran compiler available for free for noncommercial use on Linux: IFC []

    There is also the F programming language which is a (mostly) tastefully selected subset of Fortran 95: F []. Mostly it just throws out redundant features and stuff inherited from Fortran 77. It's a little picky in a teaching-language sort of way and takes some getting used to, but I have ported code to F without pulling my hair out. And the code did end up a bit clearer for the changes.
