Hardware

San Francisco Flashmob Attempts Supercomputer 148

aspelling quotes a story that says "Hundreds of area technophiles wired their computers together in an attempt to generate computing power on a par with the world's strongest supercomputers. Organizers hoped to break into the ranks of the world's top 500 supercomputers through the event, which they called 'Flashmob I.'"
  • My place? (Score:2, Funny)

    by mindstormpt ( 728974 )
    Next time they can do it at my place!

    But there will be a lot of punch and pie; maybe then they'll forget the PCs and I get my personal supercomputer so I can take over the world muahahahhaha

    I'm dreaming.
  • by zecg ( 521666 ) on Sunday April 04, 2004 @10:19AM (#8761295)
    ...is that they attained 180 gigaflops, falling short of the goal.
    • That's it? That's only about 10 times the year-old Mac that I have.

      Really, 180 gflops isn't very much, and I think certainly not deserving of the "mob" moniker.
    • by mhifoe ( 681645 ) on Sunday April 04, 2004 @10:31AM (#8761353)
      By way of comparison, Seti@Home managed 64 Teraflops over the last 24 hours.
    • by Anonymous Coward
      A single 3.2GHz Pentium 4 can theoretically do 6.4GFLOPS (given two floating point operations per clock cycle). So at least 30 processors would be needed for 180GFLOPS. Of course, "theoretical" is the key word here. In real life, many, many more processors would be needed.
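      A back-of-the-envelope sketch of that arithmetic (illustrative Python only, using the parent's assumptions of two flops per cycle and perfect efficiency):

      ```python
      # Back-of-the-envelope check of the parent's figures (illustrative only).
      import math

      clock_ghz = 3.2        # Pentium 4 clock
      flops_per_cycle = 2    # assumed FP operations per cycle
      target_gflops = 180.0  # the reported FlashMob result

      peak_gflops = clock_ghz * flops_per_cycle          # 6.4 GFLOPS theoretical peak per CPU
      min_cpus = math.ceil(target_gflops / peak_gflops)  # 29 (the parent rounds to ~30)

      print(f"peak per CPU: {peak_gflops:.1f} GFLOPS, CPUs needed at 100% efficiency: {min_cpus}")
      # Real Linpack runs on a gymnasium network come nowhere near theoretical peak,
      # which is why it took 256 machines to reach 180 GFLOPS.
      ```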
    • by Tweaker_Phreaker ( 310297 ) on Sunday April 04, 2004 @02:38PM (#8762555)
      It only attained 180 Gflops because they were only able to get 256 computers working properly. They were using a 2.5 Linux kernel which AFAIK had trouble with nForce Ethernet ports, which may have ruled out all the AMDs from working. They really didn't provide participants with much information about what was going on. After booting my computer and having it do a quick Linpack test, I had to leave the area for the rest of the day; they should have at least had us stay long enough to get it to connect to the cluster.

      For the rest of the day they had very boring lectures about supercomputing with very generalized information. I never heard anything about what was going on until the end, when they just told us that they had gotten 256 computers to achieve 180 Gflops and that we could leave.
      • They were using a 2.5 Linux kernel which AFAIK had trouble with nForce Ethernet ports, which may have ruled out all the AMDs from working.

        Well, all the AMDs except for the ones which weren't running on nForce boards, anyway.

  • um, yeah (Score:4, Insightful)

    by LOL WTF OMG!!!!!!!!! ( 768357 ) on Sunday April 04, 2004 @10:20AM (#8761303) Journal
    "Flashmob is about democratizing supercomputing," said John Witchel, a graduate student at USF who codeveloped the concept. "It's about giving supercomputing power to the people so that we can decide how we want supercomputers to be used."

    People HAVE decided what they want to contribute to, hence SETI@Home, distributed.net, folding@home, etc.
  • by Azadre ( 632442 ) on Sunday April 04, 2004 @10:20AM (#8761304)
    I thought it read "the Mob experiments with a supercomputer." I guess Mr. Soprano and his boys really want to be a part of the 21st century.
    • COSMO
      There I was in prison, and, one day I help a couple of nice older gentlemen make some free telephone calls. They turn out to be...let us say "good family men".

      BISHOP
      Organized crime?

      COSMO
      Heh. Don't kid yourself. It's not that organized. Anyway, they arranged for me to get an early release from my "unfortunate incarceration" and I began to perform a variety of services.
  • by Biotech9 ( 704202 ) on Sunday April 04, 2004 @10:21AM (#8761310) Homepage
    But they had a small number of computers (fewer than they aimed for), and they still managed to get almost half the score required to get into the top 500 list of supercomputers.

    While that's not too impressive, it does mean all those universities with labs and labs of Dells could try something similar. My old uni had hundreds of 2.4GHz Dells with a pretty decent spec, and they were mainly used for checking mail or playing Quake. Perhaps with something like this Flashmob [flashmob.com] they could be used for something a little more demanding.
    • by roboros ( 719352 ) on Sunday April 04, 2004 @10:53AM (#8761420) Journal
      There is a mature software project called Condor [wisc.edu] which allows you to do exactly that, build a compute pool out of workstations and optionally dedicated servers. It detects when workstations are idle and matches jobs to suitable resources (architecture, amount of memory and so on). It also handles restarting jobs that fail, migrating jobs when a workstation is no longer idle and so on. It is meant for high-throughput computing, i.e. running a lot of independent jobs, not for massively parallel jobs that do lots of intercommunication.

      This can be great for researchers who never seem to get enough computing power or for things like rendering video.
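      As a rough, hypothetical illustration of the matchmaking idea described above (this is not Condor's actual API or ClassAd language, just a Python sketch of pairing queued jobs with idle workstations by architecture and memory):

      ```python
      # Hypothetical sketch of matching batch jobs to idle workstations by their resources.
      # Not Condor's implementation; it only illustrates the idea described above.
      from dataclasses import dataclass

      @dataclass
      class Machine:
          name: str
          arch: str
          memory_mb: int
          idle: bool

      @dataclass
      class Job:
          name: str
          arch: str
          min_memory_mb: int

      def match(jobs, machines):
          """Assign each job to an idle machine that satisfies its requirements."""
          assignments = {}
          free = [m for m in machines if m.idle]
          for job in jobs:
              for m in list(free):
                  if m.arch == job.arch and m.memory_mb >= job.min_memory_mb:
                      assignments[job.name] = m.name
                      free.remove(m)      # machine is now busy
                      break
          return assignments

      machines = [Machine("lab-01", "x86", 512, True),
                  Machine("lab-02", "x86", 256, False),   # someone is using it
                  Machine("lab-03", "x86", 1024, True)]
      jobs = [Job("render-frame-42", "x86", 768),
              Job("sim-run-7", "x86", 256)]
      print(match(jobs, machines))   # {'render-frame-42': 'lab-03', 'sim-run-7': 'lab-01'}
      ```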

      • by Erwos ( 553607 ) on Sunday April 04, 2004 @01:07PM (#8762062)
        We use this all the time at my University. Absolutely fantastic bit of software, and it can give you another decently-sized cluster "for free" at night as long as you tell everyone to leave their computers on.

        -Erwos
      • It looks to me like this can be run on older machines in the x86 arch (RH 7 series to be precise). Would this be suitable for a home network if you wanted to use the idea for multitasking apps, in lieu of having a much faster/bigger/more blinkenlights-shinier machine? i.e., can one cobble together old boxes you might have kicking around to make a single box that works better for normal surfing, listening to music, etc. simultaneously, in replacement of something new and expensive? Poor man's SMP machine?
        • by roboros ( 719352 )
          No, you probably don't have any use for Condor for home computers used for surfing etc. It does not work at all like an SMP machine. Condor is for running a bunch of compute-intensive batch jobs, i.e. jobs that are not interactive, such as scientific simulations. Also, you can never take an ordinary multithreaded program and automatically split its execution across multiple computers, because the computers have separate address spaces and the network is too slow to simulate a shared address space, so a single applicati
          • by zogger ( 617870 )
            Hey, thanks for the tip! Never thought of that; it would do exactly what I am looking for: something like a web browser on one machine, xmms on another, chat and console jazz on another, etc. I will attempt to implement this. I have a trio of identical old P1 machines and hadn't come up with a use for them so far; this might be the ticket. My "main" machine is a 200PP, it multitasks, but.... really......
  • Only if (Score:2, Interesting)

    by Anonymous Coward
    some kind of standard distributed computing daemon/software was installed on every computer connected to the internet, would we have the world's fastest supercomputer cluster.

    All that computing power wasted.
    I wonder what would happen if M$ did this?
    • Re:Only if (Score:2, Interesting)

      by Ann Elk ( 668880 )
      Microsoft did something like this (on a much smaller scale) about 8 years ago. A few hundred idle computers scattered around the MS campus created the Chicken Crossing [glassner.com] animation that was presented at Siggraph 96.
    • It more or less already exists, it's called Altnet [altnet.com] and is installed together with Kazaa. Their website claims that the Altnet network has millions of computers.

      For more serious applications I doubt that you want to send your confidential data for computation on some random stranger's computer, given the amount of viruses, trojans etc. that are in circulation today and how "good" ordinary users are at keeping their systems secure. Encryption won't help as long as the computer is under full control of the u

  • Just hundreds? (Score:4, Insightful)

    by rastakid ( 648791 ) on Sunday April 04, 2004 @10:23AM (#8761323) Homepage Journal
    "Hundreds of area technophiles wired their computers together in an attempt to generate computing power on a par with the world's strongest supercomputers. Organizers hoped to break into the ranks of the world's top 500 supercomputers through the event, which they called "Flashmob I."

    Hundreds of geeks with their computers? And they want to build a top500-supercomputer? I think they'll need a little more power...

    I can't imagine only 'hundreds' showed up; why stay at home when you could write history? Nice chance to get 15 minutes of fame ;)
  • by s20451 ( 410424 ) on Sunday April 04, 2004 @10:24AM (#8761324) Journal
    Flashmobs are to the 2000's as streaking was to the 70's; merely the obnoxious fad of the times. Thirty years from now, flashmobs will be a footnote of history, dragged out of archival footage whenever news shows (or whatever replaces news shows) want to give context to the time.

    So, as a watershed event, I find flashmob computing to be lacking.
    • by Anonymous Coward
      Thirty years from now, flashmobs will be a footnote of history, dragged out of archival footage whenever news shows (or whatever replaces news shows) want to give context to the time.

      Cool. I look forward to sitting in front of the holovision in 2034 and watching Hal Sparks and Michael Ian Black making witty comments about flashmobs on VH1's "I Love the 00s"
    • Bah, streaking is still cool (and still happens.)
    • Flashmobs are to the 2000's as streaking was to the 70's; merely the obnoxious fad of the times. Thirty years from now, flashmobs will be a footnote of history, dragged out of archival footage whenever news shows (or whatever replaces news shows) want to give context to the time.

      Maybe not even that ... it's not like those of us in the normal part of the country ("flyover country") have ever even seen a "flashmob" ... nor are a bunch of idiots standing around inappropriately quite as visually arresting as a

  • by richard_za ( 236823 ) on Sunday April 04, 2004 @10:24AM (#8761327) Homepage Journal
    This story [mercurynews.com] has also been covered on The Mercury News [mercurynews.com]:

    "By the end of the day, FlashMob was a partial success. The crew managed to get 256 computers working together at almost half the speed required for the top 500 status."

    If this was the first attempt at breaking this record, I reckon it will be a matter of weeks or even days before it is achieved.
    • If this was the first attempt at breaking this record, I reckon it will be a matter of weeks or even days before it is achieved.

      For what purpose? Just because a bunch of guys brought their machines together for a few hours and wired them up into a cluster doesn't mean it's a supercomputer. How long are they going to number crunch? Until most of the nodes have to go home for dinner? It'd make more sense to do a distributed.net or SETI@Home style supercomputer over the Internet. Besides, there's noth

      • If I went and bought 10,000 Mac G5's:
        they didn't have to buy the PCs.

        Also, it would be useful to know how to build a supercomputer with commodity hardware, and to work out all the problems that come from having lots of different types of PCs, problems which would not exist if everybody had the same Mac G5's.
    • If this was the first attempt at breaking this record, I reckon it will be a matter of weeks or even days before it is achieved.

      Assuming the overhead of adding extra computers doesn't outweigh their added speed.

  • Geez... (Score:5, Insightful)

    by cybrchrst ( 535172 ) on Sunday April 04, 2004 @10:26AM (#8761333) Homepage
    Maybe they should call it a FlashBeowulf..

    The question is not whether this can be done -- the question is what exactly do you need to run on a supercomputer that requires a flash mob? If this is about democracy and putting power in the hands of the people, that's all good, but what is this supposed to prove? That garment workers in Sri Lanka can leave their factories simultaneously and put together a supercomputer so that they can..... what? Solve a really complex mathematical problem?

    Putting democracy in the hands of these kinds of people is about having your voice be heard and having it make a difference, not making impromptu supercomputers. A supercomputer is not going to save sweatshop workers in China or Sri Lanka or Mexico or wherever.

    Nice geek thing to do though. Maybe next time they can do it in the middle of a street and Reclaim the Streets!! [reclaimthestreets.net]

    • What does this prove? Well suppose that you need many people to act in unison at a specific time to achieve a great end - like stopping an asteroid or preventing an epidemic.

      This is a benchmark or demonstration of how well we can do this.

      I think the supercomputer experiment was kind of weakly planned. They already ran tests showing that 60 1-GHz computers run at 46.8 GFlops [flashmobcomputing.org]. They'd need at least 60,000 to reach 46.8 TFlops, which would wallop the Earth Simulator. I doubt most people would be bringing their 3.06
      • No, according to the organizers, this is supposed to show that people can have this kind of computing power anywhere in the world.

        Now please explain why or how it would be up to normal citizens to do something like stop an asteroid. Not all the computing power in the world is going to stop something moving at 50 miles per second straight at the planet.
    • Actually, the flashmob philosophy is about mob rule, and putting the power in the hands of anybody who can manipulate enough people. It's fine for some liberals to 'prove the concept' but the end result will be populist lynch mobs if the snowball continues to roll.

      It doesn't have much to do with democracy. It might be pretty effective at smashing abortion clinics or destroying the office buildings of particularly litigious attorneys. Be careful of what you promote.

      • ...and other assorted goon forces already do this, using their existing telecommunications "flashmob" net, and frequently abuse large numbers of people by bringing to bear overwhelming despotic force onto small numbers of helpless victims. I see the concept (if taken into the political activism arena) as a sort of leveling of the playing field; perhaps not as many massacres and abuses might occur if enough concerned citizens can be mustered in time to force a stand down of those despotic forces. Even
        • I'm glad to see some tempered reason here. There's a shortage of it too often on slashdot.

          Something to be mindful of is that the mobs that swarmed through Europe around the middle of the 20th century were fostered by a political organization (which I'll leave as nameless) that was one of the first to figure out how to harness the power of mass communications. Which they used to great effect in their conquests, for more than a decade before being stopped. Look at their rhetoric. They claimed the mantle
    • Nice geek thing to do though. Maybe next time they can do it in the middle of a street and Reclaim the Streets!!

      I don't get it. They advocate standing in streets to make a difference?

      Seems about as useless as supercomputers with nothing to compute.
  • by richard_za ( 236823 ) on Sunday April 04, 2004 @10:35AM (#8761366) Homepage Journal
    I found some more information on the USF [usf.edu] Flashmob Computing site [flashmobcomputing.org]. To join you get a CD-ROM which boots your computer: The CD-ROM contains everything you need including an operating system, networking and configuration software and the benchmarking software.

    They will be publishing the ISO so we can all go out and create our own flashmobs.
  • by erbert ( 768166 ) on Sunday April 04, 2004 @10:38AM (#8761373) Homepage
    SOOOO... to be part of the revolution all we have to do is find hundreds of 'technophiles' with the time to bring their computers to a gymnasium, network them together, and then what? Elect a leader? Sounds messy. A big LAN party with UT 2004 sounds more reasonable. Everybody can get their own rocket launcher and do whatever the heck they want with it.
  • Biggest SF LAN? (Score:2, Interesting)

    by lotsofno ( 733224 )
    If nothing else, I would've loved to have been there just for [url=http://winamp.com/about/article.php?aid=10562 ]the 660 person LAN game they were supposed to have afterwards[/url].
    • ur, sorry about that... mod parent down, and click here [winamp.com] for the biggest SF LAN link.
    • Only about 20 or 30 of us stayed after for the LAN. We played a lot of UT2004, Quake I and even some Natural Selection. The most important part I think was that we found a whole bunch of new people that want to have LAN parties so we're going to try to get a monthly thing going.
  • by Faust7 ( 314817 ) on Sunday April 04, 2004 @10:40AM (#8761384) Homepage
    Other computers, especially older models with the minimum amount of RAM, gave the organizers trouble even if they succeeded in running the FlashMob software. The computers couldn't process data fast enough, or make enough memory available, to keep up with the group and had to be taken offline.

    Computer Darwinism in action, folks.
  • by richard_za ( 236823 ) on Sunday April 04, 2004 @10:53AM (#8761418) Homepage Journal
    More coverage (copied from USF site):
    NY Times [nytimes.com],
    C|Net [com.com],
    San Jose Business Journal [bizjournals.com],
    NPR [publicradio.org],
    UK PC Pro [pcpro.co.uk]
    USF News [usfca.edu]
  • by panurge ( 573432 ) on Sunday April 04, 2004 @11:21AM (#8761526)
    Although a G5, P4, etc. can theoretically do several GFlops, AFAIUI they cannot reach a sustained GFlop on a large-scale problem because there is insufficient cache memory. The peak speed across the bus is only a few Gbytes/sec, and to sustain a GFlop on a large problem could mean a maximum data rate of 30 Gbytes/sec (2 80-bit reads and 1 80-bit write per flop) and a minimum in excess of 10.

    And that's before taking network and hard disk throttling into account.

    Has anyone done any work on the actual sustainable processing rate for large data sets using currently available operating systems and hardware?
    Is Linpack actually representative? Forget all those graphics-bound melons-to-potatoes "benchmarks".
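    For what it's worth, the 30 Gbytes/sec figure above follows directly from the stated assumptions (two 80-bit reads and one 80-bit write per flop, with no cache reuse); a trivial Python sketch:

    ```python
    # The parent's worst-case arithmetic: 2 reads + 1 write of 80-bit (10-byte)
    # operands per floating-point operation, with no cache reuse at all.
    bytes_per_operand = 80 // 8        # 10 bytes
    accesses_per_flop = 3              # 2 reads + 1 write
    sustained_gflops = 1.0             # target sustained rate

    required_gb_per_sec = sustained_gflops * accesses_per_flop * bytes_per_operand
    print(required_gb_per_sec)         # 30.0 GB/s, versus a few GB/s of real bus bandwidth
    ```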

    • by Anonymous Coward
      The G5 *can* sustain several GFlops on the right problems. If all you want to do is non-local copy commands over a network of some kind, then no, it will not sustain. If there is L1 and L2 cache usage on some problems (as is true on some math problems) then the G5 is perfect. I am working on some Altivec performance algs for the G5 and on some problems cannot get maximum bandwidth from DRAM, which means I am compute bound.

      To answer your question about people who work on sustainable problems--yes, I do.
      • Well, how about four P4s instead of your G5? Dollar for dollar, etc. etc. You don't need all that expensive Apple window-dressing for a compute-intensive application. Bare motherboards on rack shelves rule.

        • by Anonymous Coward
          I look at all hardware and will not limit myself to one specific brand.
          Example:
          On a 1D 1K complex FFT, I can get more than twice the computational output on the G5/970/970FX than on the P4 @ 3.something. The P4 is running the best Intel code available ($$) and it looks like it is efficient (from a pipeline perspective) on a sim. There don't seem to be many more stalls than usual. The G5 on the other hand still has stalls so there is room for improvement.

          P4 is faster than P4 Xeon, and faster than the Ita
      • That's interesting and supports my suspicions. I've always assumed that, when push comes to shove, IBM has more and better resources to throw at architectures for solving serious problems than Intel. Intel now seems to be backtracking away from the P4 towards the derivatives of the PIII architecture despite its two competitors clocking around 2GHz (I'm counting the G5 and AMD64 as the competitors).

        Anyway, thanks for the info.

    • It all depends on how much temporal locality you have in your algorithm. If you are trying to get the dot product of two gigantic vectors, you will be limited by memory bandwidth. If you're trying to multiply or invert matrices, there is a great deal of temporal locality, and recent P4s can sustain over 4Gflops on large matrices.
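      A rough flops-per-byte comparison behind that point, as an illustrative Python sketch (assuming double precision and counting only compulsory memory traffic, i.e. perfect cache reuse in the matrix case):

      ```python
      # Arithmetic intensity: dot product vs. n x n matrix multiply.
      n = 4096
      bytes_per_double = 8

      # Dot product: 2n flops, 2n elements read once.
      dot_intensity = (2 * n) / (2 * n * bytes_per_double)        # 0.125 flops/byte

      # Matrix multiply: 2n^3 flops over 3n^2 elements (best case, with cache blocking).
      mm_intensity = (2 * n**3) / (3 * n**2 * bytes_per_double)   # roughly n/12 flops/byte

      print(f"dot product: {dot_intensity:.3f} flops/byte")
      print(f"matrix multiply: {mm_intensity:.0f} flops/byte")
      # The dot product is stuck at memory bandwidth; the matrix multiply can
      # keep the FPU busy as long as the blocks fit in cache.
      ```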
  • Could you imagine a beowulf cluster of FlashMobs?!??
  • by Anonymous Coward
    Sticking a bunch of PCs together with Ethernet cables is likely only to be helpful for those problems which are referred to as "embarrassingly parallel" (e.g. SETI@Home). Not only do the PCs not have enough cache, but the latency and bandwidth issues especially are going to kill them as soon as they try to tackle anything REMOTELY like "global warming or AIDS research." Let me know when they connect via something faster like InfiniBand. The point is not that "supercomputing is only in the hands of
    • The point is not that "supercomputing is only in the hands of the few"; the point is that supercomputing is in the hands of those who know what they're doing.

      When I installed Linux for the very first time, I definitely had no clue what I was doing. And that was when Slackware was pretty much the only distribution in town. The point is: I learned.
  • real application (Score:4, Insightful)

    by albertoiii ( 86778 ) on Sunday April 04, 2004 @12:19PM (#8761812)
    A company like Pixar, which needs a lot of computing power, should host a flashmob, and give people who come a free ticket to the movie they just helped render.
    • by SoTuA ( 683507 )
      but in the real world, a company like Pixar contracts a render farm that does much better work than a bunch of l33t b0x3n.
    • A company like Pixar, which needs a lot of computing power, should host a flashmob, and give people who come a free ticket to the movie they just helped render.

      After the first viewing, the producer looks at the technical director, "How the #@$% did RMS get in our movie, and why is our main lead a freaking penguin?! That's the last time I let geeks render my movie! Get me Microsoft on the phone..."

      -Adam
  • Feh (Score:3, Insightful)

    by Illserve ( 56215 ) on Sunday April 04, 2004 @12:27PM (#8761856)
    From the article:
    Organizers hope the Flashmob concept can eventually be applied to problems requiring high-powered computing such as the study of global warming or AIDS research.

    C'mon, do they really think people are going to crowd together in a high school gym to do AIDS research?

    Of course not. What an obvious attempt to put a facade of legitimacy onto a publicity stunt. As with all Flashmobs, this one is just an attempt to gain attention. This line sums it up nicely:

    "I just want to be part of history," said Glenn Montano, a USF senior majoring in computer science.

    At least he's being honest.

  • Actually, I've had a thought. The new generation of AMD64 portables should eventually have 64-bit Linux drivers for all the important stuff, including the gigabit Ethernet. Perhaps someone should start planning a walk-in supercomputer creation based around machines like the eMachines 6805/6807 and the Acer 1501/1502.
    As they both have easily interchangeable hard disks, it should even be possible to release a suitable disk image in advance.
    And when the G5 Powerbook or whatever emerges, the opportunity for the
  • by frankie ( 91710 ) on Sunday April 04, 2004 @12:39PM (#8761929) Journal
    FlashMobComputing is unlikely to get much higher up the GFlops ladder as long as their network runs on 100baseT ethernet. Unlike SETI or folding@home, most supercomputing problems are not "embarrassingly parallel" [google.com]. Things like Top500's Linpack are mainly bandwidth-bound, not CPU-bound.

    Remember that when Apple & Virginia Tech designed Big Mac [top500.org], both firewire and gigabit ethernet were built-in, but rejected as being too slow.

    Meanwhile, we're all ignoring that "supercomputing for the masses" is already here. The original Cray-1 supercomputer in 1976 ran at a whopping 75MHz with 160 megaflops. Today you can get that much power in a palmtop.

    • by Anonymous Coward
      Ugh. No, dense matrix operations (like the Linpack benchmark used for Top500) are definitely compute bound. The more interesting test would be =sparse= matrix factorization. This is the kind of app that turns a "cluster supercomputer" into a gigantic piece of crap. I've heard rumblings that Jack Dongarra wants to change the Top500 tests to include some stuff besides dense matrix stuff to give a more balanced view of a computer's performance.

      All these "off-the-shelf" computers also tend to ignore the usa
    • Meanwhile, we're all ignoring that "supercomputing for the masses" is already here. The original Cray-1 supercomputer in 1976 ran at a whopping 75MHz with 160 megaflops. Today you can get that much power in a palmtop.

      Meanwhile meanwhile, some people still insist that a supercomputer is little more than a fast calculator. A supercomputer is one which can process huge amounts of information in a short amount of time. While the Cray was slow, comparatively, at the time it could process and move data sets
      • While the Cray was slow, comparatively, at the time it could process and move data sets like no palmtop today can even approach

        Yes, a true supercomputer is defined not only by its FLOPs but also by its memory throughput. The Cray-1 in 1976 had 8MB RAM on a 64-bit bus at 12.5ns (80MHz). Ignoring its 4-cycle latency, that works out to 640MBps for $8.8 million (until your dataset exceeds 8MB in a few milliseconds and you have to go to disk).

        For comparison, several current palmtops have 64MB of PC100 SDRAM. O
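        The 640MBps figure spelled out (a small Python sketch of the arithmetic stated above, ignoring the 4-cycle latency just as the comment does):

        ```python
        # Cray-1 memory path as described above: 64-bit-wide bus at 80 MHz (12.5 ns).
        bus_bytes = 64 // 8      # 8 bytes per transfer
        cycle_mhz = 80

        mbytes_per_sec = bus_bytes * cycle_mhz
        print(mbytes_per_sec, "MB/s")   # 640 MB/s
        ```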

  • by JoeCommodore ( 567479 ) <larry@portcommodore.com> on Sunday April 04, 2004 @01:15PM (#8762099) Homepage
    Oh. Never mind, they've already gone. :-(
  • by stevetures ( 656643 ) <stevetures@NOSpaM.gmail.com> on Sunday April 04, 2004 @03:35PM (#8762894) Homepage Journal
    Howdy all,
    I'm a full-time tech/sysadmin at the USF law grad complex, and I was able to donate 4 machines (5 CPUs) that day. The UT2004 LAN party that I stayed at for a little while was fun.

    Though there were a fair number of organizational victories and failures, and a good amount of business donation, it seemed like the community responded with a pretty small pile of computers. I would guess the one thing missing was a feeling of community input to the project. I would like to see another Bay Area flashmob supercomputer with an elected board in charge of the project instead of it being in the hands of the encumbered (both politically, inside USF and out).

    If we take the time to talk more about standards implemented, and make sure that the community can have a chance to contribute, I think top 500 is a realistic goal.

    P.S. If you look at the photos, you'll notice at least half of the net connections and tables are empty. Feel free to send me questions and I'll answer them as best as I can.
  • Could you imagine being there when these hundreds of computer geeks attempt such a thing? Yikes! What a complete waste of time ...Like going to a computer club meeting but worse by a factor of ten.
  • OK, suppose you've got a large gang of geeks who are torqued at M$ for some bit of encrypted software and they put together one of these mob-super-clusters. Someone gets a wild hare idea to factor the crypto key and proposes it to the mob, who, bereft of common sense and lawful reflection, decide to break the crypto before Responsible Parties can shut them down.

    We have long known that mobs can give rise to emergent phenomena which can't be predicted. It'd be amusing to consider what such a mob's clustered cp
  • http://www.apple.com/acg/xgrid/
  • For those who are interested, one of my writers from PimpedOutCases wrote a little article on the event, with plenty of pictures. He was involved, although they booted him (and all other AMD folks) from the final run since apparently the AMD systems were buggering up the computations. As has been mentioned here, it could have been an issue with the nForce ethernet, however my writer says it had something to do with the AMD systems completing their pieces of the computation too fast...although that seems a b
