Hardware Technology

Distributed Computing Economics 130

machaut writes "In a ClusterComputing.org article, Jim Gray, director of Microsoft's Bay Area Research Lab, provides an interesting economic analysis for building distributed systems. When do you choose a grid over a cluster or a supercomputer? When does it pay off to move a task to the data vs moving the data to the task? He takes current hardware and networking costs into account to answer those questions."
This discussion has been archived. No new comments can be posted.

Distributed Computing Economics

  • Warning: (Score:5, Funny)

    by pecosdave ( 536896 ) on Monday July 07, 2003 @03:30PM (#6384898) Homepage Journal
    Ungodly numbers of "Beo-Wolf" cluster jokes arriving now!
  • by Anonymous Coward on Monday July 07, 2003 @03:30PM (#6384901)
    How much does it cost to keep hundreds of regular computers (with all their extra peripherals) crunching away vs. a specially designed computer or set of computers?
    • by TopShelf ( 92521 ) on Monday July 07, 2003 @03:50PM (#6385106) Homepage Journal
      That shows how this analysis is done from the perspective of the party performing the investigation, as opposed to society as a whole. For instance, Seti@Home's costs in terms of user electricity, maintenance, etc. aren't considered here...
      • Didja read the article?

        SETI@Home harvested more than a million CPU years worth more than a billion dollars. It sent out a billion jobs of 1/2 MB each. This petabyte of network bandwidth cost about a million dollars. The SETI@Home peers donated a billion dollars of free CPU time and also donated 10^12 watt-hours, which is about $100M of electricity.

        No, it doesn't include the value of user-performed maintenance, but as an economic analysis, it would be pretty negligent not to include the value of donated CPU time.
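For what it's worth, the figures quoted from the paper hang together arithmetically. A quick sanity check (the electricity price below is my assumption, not a number from the paper):

```python
# Sanity-check the SETI@Home figures quoted above from Gray's paper.
jobs = 1e9                    # "a billion jobs"
mb_each = 0.5                 # "1/2 MB each"
total_pb = jobs * mb_each * 1e6 / 1e15
print(total_pb)               # 0.5 -- roughly the "petabyte" of traffic

donated_wh = 1e12             # "10^12 watt-hours" donated by peers
assumed_usd_per_kwh = 0.10    # assumption: ~10 cents per kWh
print(donated_wh / 1000 * assumed_usd_per_kwh)   # 100000000.0, i.e. ~$100M
```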

    • by yintercept ( 517362 ) on Monday July 07, 2003 @04:11PM (#6385323) Homepage Journal
      The whole goal of distributed computing is to externalize costs. When someone else bears your cost...then, yeah, it's free. This is the idea behind P2P. P2P is significantly less efficient than specialized servers, but it externalizes costs. In some cases, SETI-type arrangements use truly idle equipment. In other cases, they push real costs onto unwitting groups.

      Your employer has to pay for the electricity if you leave a bunch of computers on at night to help calculate protein folds. It is not necessarily a bad thing...just that an unwitting party is bearing a cost. In many cases the cost is of little consequence...in some cases it is.
    • Schools and libraries often have old hardware as a result of low hardware and software budgets. It would be interesting to see how much more kick could be gained from the old hardware by setting up a network of QNX [qnx.com] machines or another kernel/microkernel with distributed processing capabilities. Many libraries and schools are already familiar with various F/OSS tools like KDE+Mozilla+OpenOffice & co., so swapping out the [micro-]kernel would be unnoticed by the user, except for possibly improved performance...
  • Whassatnow? (Score:5, Funny)

    by Anonymous Coward on Monday July 07, 2003 @03:31PM (#6384903)
    When do you choose a grid over a cluster or a supercomputer?

    When you have a really high-paying job where you are paid to make such decisions.
  • s@h, et. al.. (Score:5, Interesting)

    by i.r.id10t ( 595143 ) on Monday July 07, 2003 @03:31PM (#6384913)
    .. have already figured it out - let other willing users pay the power bill, bandwidth cost, etc. and crunch the data in their spare time. Seems to be working well for seti@home, etc.

    Of course, if you are working with sensitive data (military stuff, major trade secrets, etc.) your security/privacy needs will outweigh the costs involved with doing it all in house.
    • Re:s@h, et. al.. (Score:3, Insightful)

      Also, as long as people are still allowed to decide what runs on their own computer you will have to convince them that they should help you with your distributed computing task.

      SETI@Home worked so well because people want to know the answer. People are interested in the results. If you tried to do a distributed apple browning application nobody would download it.

    • ...and SkyNet became self-aware (again) at midnight July 3, 2003...
    • Self-employment (Score:3, Interesting)

      by swb ( 14022 )
      What if it became a self-employment option?

      In other words, my home office, instead of being a revenue-sucking hole filled with computers was instead a source of distributed computing power I could sell on an ad-hoc basis? I eat the tab for the upkeep and get paid cash per work unit I'm able to get done.

      • ISTM that once enough companies start offering grid management services, the price paid to the end-users (the node owners) will come way down, and then it will probably not be enough to cover electricity for the computers & A/C.
  • hmm... (Score:4, Interesting)

    by xao gypsie ( 641755 ) on Monday July 07, 2003 @03:33PM (#6384931)
    interesting thought, but what is the difference between this and the age-old concept of the cost/benefit relationship? I'm not trolling; it just seems that it is that concept with a tech twist.

    • by Anonymous Coward
      Uhh, nothing. It IS a cost/benefit analysis.
    • It is cost/benefit, but applied to a specific case.
    • interesting thought, but what is the difference between this and the age-old concept of the cost/benefit relationship? I'm not trolling; it just seems that it is that concept with a tech twist.
      Sure, it is just cost/benefit, but that tech twist is what has been missing for years. Important IT decisions have been left to the preferences of IT people instead of conducting a cost/benefit analysis for the company as a whole. I think this tech twist is very important, and we will absolutely see more of it...
  • I was happy that Gray covered SETI@Home, as I think the nature of SETI is akin to where certain aspects of distributed computing may go in the future. However, I argue that he left some key parts of SETI economics at the door; most notably, data integrity and security. As I understand it, *over half* of SETI's processing power, bandwidth, and so forth is used to verify data integrity, as it's using untrusted hosts to do its calculations.

    This doesn't make SETI a poor supercomputer, but it does change the economics of it. An economic model of computing resources which accounts for work done by untrusted hosts as involving different overhead as that done by trusted hosts would be a much more useful metric to think in terms of.
    • by Vellmont ( 569020 ) on Monday July 07, 2003 @04:47PM (#6385761) Homepage
      I think it's probably safer to say that seti@home has a huge surplus of computational power, and uses it to verify each result (though it's not strictly necessary to do so). With only one data source (Arecibo) you can only produce data so quickly, and once you have enough computational power to do the analysis in real time, any extra is just surplus that can be used to verify. They did, however, later add some extra analysis to the data to take better advantage of the huge surplus of computing power they have.

      The important point, though, is that for seti@home each individual workunit, while important, isn't critical to the whole project. If a small percentage of workunits aren't computed perfectly, it's not catastrophic. In other words, there's a certain amount of tolerance for inaccuracy. For a project like the OGR [distributed.net] (Optimal Golomb Ruler) by distributed.net, each workunit must be calculated perfectly, as the goal is to prove which ruler is the optimal one. If a workunit isn't verified, you haven't really proven anything, since it's possible (and probably likely) that hardware failure produced an inaccurate result somewhere in the millions of workunits calculated. (Or perhaps a modified client produced inaccurate results.) Other distributed computing tasks have different amounts of tolerance for inaccurate results.

      Your underlying point is a good one though. For some projects the need for integrity of the results is very high, so larger computing power may be necessary to verify each result.
    • Also, in a grid environment where those executing code do not trust those buying cycles, there is more significant overhead because the farmed-out code must run in a sandbox. So when neither party trusts the other, you might reasonably expect to end up running at a third of native speed, or worse.

      Further, there is a really fundamental bit not mentioned at all in the article -- the effect of network latency. Many parallel applications require frequent synchronization between nodes, and performance quickly becomes...

  • SETI@home (Score:2, Insightful)

    The article states that SETI@home has a whopping 54 teraflops of computing power. This is an unfathomable number of CPU cycles, and guess what, it is all used FOR FREE! This is a great example of how a community of users is willing to sacrifice something (unused CPU cycles and small amounts of bandwidth) to meet some great future goal (contact with extraterrestrials). Did I mention it is FREE? I wonder how much money researchers could save or use in a better fashion if they all used distributed computing...
    • Re:SETI@home (Score:5, Informative)

      by flabbergast ( 620919 ) on Monday July 07, 2003 @03:55PM (#6385155)
      The author points out: "The ideal mobile task is stateless (needs no database or database access), has a tiny network input and output, and has huge computational demand."

      "And of course, SETI@Home is a good example: it computes for 12 hours on half a megabyte of input."

      So, for projects that fit this model, then they should save money over supercomputers. But few projects fit this model, with the author mentioning web and data processing, data loading, CFD, ie anything that "generates a continuous and voluminous output stream" as economically unfeasible. So, car companies really do need those supercomputers to virtually crash their cars. =)

    • Skynet used 60 teraflops in T3. SETI is obviously waiting for aliens to contact them so they can increase the efficiency of their processing to squeeze an extra 6 teraflops out. Then they can release the terminators and wait for the government to give them enough money to buy the world from the machines and rule with an iron fist. CONSPIRACY!!!!!
  • Summary... (Score:2, Insightful)

    Greedy overcharging telcos make grid computing over the internet more expensive than traditional supercomputing, unless you can get people to pay for you (SETI).
  • by fishynet ( 684916 ) on Monday July 07, 2003 @03:37PM (#6384985) Journal
    All the work must (or at least should) be double-checked to make sure everything is correct.
    • It's not necessarily a bad thing; it just means that distributed computing is better suited to some problems than others.

      There's a whole class [wolfram.com] of problems which can take a tremendously long time to solve, but for which the solution, once found, can be verified very quickly.

      The distributed.net key-cracking contest was like this -- you don't have to double check every piece of work because once you've found the key, it is trivial to test it to make sure it's right. The OGR project works the same way, and...
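The search-hard/verify-cheap asymmetry described above can be sketched in a few lines. The hash, key, and keyspace below are illustrative stand-ins, not the actual RC5 computation:

```python
# Toy sketch of a search-hard / verify-cheap problem, in the spirit of the
# distributed.net key crack: finding the key takes a scan of the keyspace,
# but checking a claimed answer is a single hash.
import hashlib

def h(key: int) -> str:
    return hashlib.sha256(key.to_bytes(4, "big")).hexdigest()

SECRET = 48611                  # hypothetical hidden key
TARGET = h(SECRET)              # published target digest

def search(keyspace):           # expensive: O(|keyspace|) hash evaluations
    for k in keyspace:
        if h(k) == TARGET:
            return k
    return None

def verify(k: int) -> bool:     # cheap: one hash, so an untrusted worker's
    return h(k) == TARGET       # positive claim is trivial to audit

found = search(range(100_000))
print(found, verify(found))     # 48611 True
```

As the reply below notes, this only guards against false positives; a worker that silently discards the winning block (a false negative) is not caught by cheap verification.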

      • The distributed.net key-cracking contest was like this -- you don't have to double check every piece of work because once you've found the key, it is trivial to test it to make sure it's right.

        Almost true. That covers the problem of a "false positive" (client says that this is the correct answer, check it to ensure that it is), but does nothing to counter the problem of a "false negative".

        What if I have "the" RC5-72 key in a block on my hard disk and, for some reason or other, I report that I didn't find it?
    • Only with untrusted computers, remember. I run a small distributed network at my workplace (the 30 CPUs in the office, about half of which are in the render farm) and don't bother to doublecheck work - if it's wrong, it's my code at fault, and doing it again won't change anything. Nobody's hacking my client for credit - there *isn't* any credit.
  • Microsoft and IBM tout web services as a new computing model - Internet-scale distributed computing.
    They observe that the current Internet is designed for people interacting with computers. Traffic on the future Internet will be dominated by computer-to-computer interactions.


    And that explains why Microsoft has suddenly declared war on spam: they have to free bandwidth for their own .NET message passing. Remember folks, Microsoft never does anything without a reason, and certainly never does anything for the good of anybody else but themselves.
    • >Remember folks, Microsoft never does anything without a reason, and certainly never does anything for the good of anybody else but themselves.

      And how is this different from how you or I act?
      • by Rosco P. Coltrane ( 209368 ) on Monday July 07, 2003 @04:04PM (#6385259)
        And how is this different from how you or I act?

        I don't know about you, but I make GPL software and give it away for free, and therefore I give time and money to the community, partly to pursue a certain idea of the computer industry I desire.

        In a way, it's just like people who run the Seti@Home client : they don't do it just "to get a free screensaver" like that Microsoft guy narrowly thinks, they also do it because they want to feel part of a greater, more noble effort than just getting rich quick.

        When was the last time Microsoft gave anything open-source or for free that didn't serve one of their short, medium or long term plans ? I mean, it's okay, they're there to make money and they admit it, there's nothing wrong with this goal as long as they try to achieve it morally and legally, but why should it be the same for everybody ?

        • I was more thinking about how you do give away the code for free for a reason. You have a self-interest in giving the code away: you want the computer industry to be a certain way.
          Another way to think about the SETI people is that they are greedy/prideful that THEY could be the one who finds the signal that indicates ET is contacting us.

          Purely self-less acts are very very rare and so could we expect MS (or any corporation) to do so any more than we would ourselves?
    • A fine theory, but I disagree (and I have a nagging suspicion this is a troll, but either way, it bears answering).

      MS is getting on the anti-spam problem because it helps them, yes. But not for some theoretical future savings on bandwidth costs. They're doing it because taking an active role looks good to customers and investors, both of which are increasingly seeing spam as a real problem and not just something us techies talk about.

      Remember folks, publicly held corporations are *legally* driven by one thing...
      • I have a nagging suspicion this is a troll

        FYI, I never troll.

        You may be right, they may do it to look good, or they may also do it to free bandwidth for their ever-increasing patches and to make .NET a viable product proposition, like I believe. But whatever the reason, I'm only saying they're doing it for a purpose, and people should cross-read declarations made by big corporation reps to find the motive behind their actions.
        • people should cross-read declarations made by big corporation reps to find the motive behind their actions

          I couldn't agree more. In the end, it ALWAYS has to boil down to the bottom line, even if some of the line items in question are more intangible than others (like customer/consumer goodwill).

          Xentax
      • Well put, but how the hell will they sell their incredible online gaming software when servers can't hop around quick enough because of spam traffic? All bets are off if the bandwidth demands of online gamers plus spam reach the predicted levels.

        Game console sales are way higher than the percentage of console owners who go online. If the MS gamers' .NET heaven vision does happen without a corresponding drop in spam traffic, the good old /. effect will be meaningless. No one except perhaps Taco will get on...

    • by yintercept ( 517362 ) on Monday July 07, 2003 @04:17PM (#6385378) Homepage Journal
      Traffic on the future Internet will be dominated by computer-to-computer interactions.

      This is already true. Most email traffic these days seems to be marketers talking to spam filters.

  • by Anonymous Coward
    to Microsoft Bay Area Research Facility
  • Spoiler (Score:4, Informative)

    by 4of12 ( 97621 ) on Monday July 07, 2003 @03:43PM (#6385040) Homepage Journal

    .
    .
    .
    Conclusions

    Put the computation near the data.

    My own general take on all this is that Moore's Law for CPU/data costs vs. time will beat the decrease in network latency costs vs. time, and we'll generally expect to see communications protocols become more "intelligent" to compensate for this barrier that cannot be overcome. BW will be relatively cheap, but the cost of building up and tearing down a connection will remain high enough to discourage multi-exchange handshaking (i.e., UDP model vs. TCP model).

    • "Put the computation near the data."

      I don't think it's that simple, at least not for a general purpose system.

      The Seti@home app doesn't care about net latency or bandwidth. Non-realtime Video encoding cares about bandwidth but not latency. Finite element analysis cares about both. Intelligent resource management and task classification will be very important.

      I suspect that as the field develops, we'll see many existing NUMA techniques simply extended outside the box. The network really is the computer.
  • by Anonymous Coward
    The last time I read an article from the Microsoft Research guys was in Communications of the ACM. The article was about media center computers (in the article, named Mbox) and digital consumer product consolidation/standardization. Of course there was no mention of Apple and just a brief acknowledgement of TiVo.

    As a strange coincidence, HP and others announced their media center PCs shortly afterwards, followed by Microsoft releasing XBox Live.

    Now the same Microsoft researcher is talking about grids and...
    • You mean the one that uses the spare processing power of the XBox live users to number crunch for Microsoft?
    • by steve_l ( 109732 ) on Tuesday July 08, 2003 @12:13AM (#6388683) Homepage
      no doubt ... to date, the Grid is very Java-centric. Now maybe .NET could deliver a speedup, but the nice thing about Java is that (a) the latest 1.4.2 JREs use the PIII SSE and P4 SSE2 register sets for better float and double performance, and (b) you can put some serious Unix servers in the grid for bonus speed.

      One thing Jim ignored is the cost of software. Because MS effectively charge per-CPU for their system, you cannot afford to build a Beowulf cluster on Windows, let alone a full grid. So if MS do want to play in grid space, they need a way to price their platform so it makes economic sense. Didn't see that in the paper.

      (NB: MS do clustering already, it is just focused on DBs and big IIS installations, and it costs big numbers)
  • Strange math (Score:2, Interesting)

    From the article :

    1 Mbps WAN link $100/month

    From this we conclude that one dollar equates to:
    $1=

    1 GB sent over the WAN


    Oh yeah? 1 Mbps for a month == 2,678,400 Mb per month == 334,800 MB per month, i.e. about 335 GB. 335 GB / 100 == about 3.35 GB.

    From this I conclude that one dollar equates to about 3.35 gigabytes sent over the WAN, which is the same order of magnitude as the paper's figure.

    Gosh that was scary. I can restart xMule now ...
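Redoing the arithmetic above with consistent units (assuming a 31-day month and a fully utilized link):

```python
# How much data does a fully-utilized 1 Mbps WAN link
# (Gray's $100/month figure) move per dollar?
seconds = 31 * 24 * 3600           # one 31-day month = 2,678,400 s
megabits = 1.0 * seconds           # 1 Mbps sustained -> 2,678,400 Mb
gigabytes = megabits / 8 / 1000    # ~334.8 GB per month
print(gigabytes / 100)             # ~3.35 GB per dollar
```

So a dollar buys gigabytes, not terabytes, and Gray's "$1 = 1 GB" is the right order of magnitude once you allow for less-than-full utilization and protocol overhead.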
  • by bigpat ( 158134 ) on Monday July 07, 2003 @03:50PM (#6385103)
    Just last year we were discussing data transfer vs. the time it would take to overnight some data in a package. It worked out that it was faster, and wouldn't clog up our line, to burn the DVDs and send them through an international package service vs. send it over the T1s. I think for all but the largest businesses this is probably still true for larger (gigabytes) amounts of data. Network costs are too high to be putting data far from where it is to be used. Whether CEOs realize it or not, this has a great effect on the way businesses with multiple locations structure their company and work together.

    • by Anonymous Coward
      Never underestimate the bandwidth of a station wagon full of CDROMs.
    • FedEx is more cost-effective if latency and manpower are not issues. Though if you're still using T1s for data, you probably don't have much in the way of data transfer needs; T1s are horridly expensive compared to their carrying capacity. Often, for 10-20 times the cost of a T1, I can get an OC3 link that has 100 times the carrying capacity, only useful if you need to move data all the time. There is also the use of satellites; they can get OC3 speeds at a cost of about $500 an hour, assuming all the offices are...
      • "Often, for 10-20 times the cost of a T1, I can get an OC3 link that has 100 times the carrying capacity, only useful if you need to move data all the time."

        Good if you can share the costs of an OC-3 with 10 to 20 other companies. Maybe when the office space market heats up a bit, more buildings will just start providing this service to their tenants. Wonder if the idea of sharing anything would fly in a boardroom.
          • Buildings have been providing services like this to the internet for years now. I worked with a company back in the mid '90s that brought a DS3 into a building, then sold it off to tenants, handing them off Ethernet and managed firewalls. Even at a DS3 that costs about 10 times as much as a T1 and has 30 times the carrying capacity, it's a decent return.
          • "Buildings have been providing services like this to the internet for years now"

            It is the "10 times as much" part of what you are saying that I think you understate the importance of.
    • burn the DVDs and send them through an international package service vs send it over the T1s. I think with all but the largest businesses this is probably still true for larger (Gigabytes) amounts of data.

      Yes, I believe that's how SETI@Home gets the raw data from the radio telescope to the data centre: they write it to tape, then fly it.
  • Yes, there are plenty of systems available for distributed computing tasks.

    Yes, there are plenty of free CPU cycles.

    But bear in mind that the number of new entrants to the Internet is not growing exponentially as it once was; if anything, current trends are flattening.

    Whereas, if both science and industry fully embrace this mode of problem solving in the next few years, one has to wonder how many apps it will take to render it ineffectual (1,000? 10,000?), or will we be able to go to this well for...

  • One dollar. (Score:5, Funny)

    by Anonymous Coward on Monday July 07, 2003 @03:51PM (#6385115)
    Wow, what a world. $1 will now buy:
    1 GB sent over the WAN
    10 Tops (tera CPU instructions)
    8 hours of cpu time
    1 GB disk space
    10 M database accesses
    10 TB of disk bandwidth
    1 large beverage
    1 of everything in the $1 store
    1 unlimited phone call from some 10-10-### phone company.
    5 packets of Kool-Aid
    10 packets of generic kool-aid
    2 cans of Coke

    When I was a child, data was expensive, and food was cheap...
  • This is an old maxim of design of any multi-tiered system. The reason is this: computation is largely about selecting and filtering data, before sending the results on to further tiers. This selection and filtering process requires many times more bandwidth towards the data source than it does towards the client layers.
    This only stops being true when there is no significant data, i.e. when the computation creates the data, as in the author's examples of render farms.
  • Anybody tried it? (Score:5, Interesting)

    by LinuxParanoid ( 64467 ) on Monday July 07, 2003 @04:01PM (#6385227) Homepage Journal
    It's a nice piece of analysis. Someone could have done it 8 years ago when Java came out; the facts are not significantly different. (The values are different, of course, but the ratios involved are pretty similar. I did some thinking along these lines back then, and again in 2000 when considering working for a "hot P2P company" that an old acquaintance of mine was running.)

    My thinking went something like this: There are only a few, "niche" applications which need more compute power and which people pay for (distributed rendering, CFD, FEA, maybe a couple others). Maybe you could build that into a 10-30 million dollar business if you overcame a zillion obstacles but it didn't look like a billion or multi-billion dollar business. The applications for which people buy beefy servers, and which have a monetary payback, are mostly database applications. For those, you need to move the entire database near to the number-crunching PC, and that's not really feasible due to the cost of transporting Gigabytes of data or the unlikelihood that the PC's hard disk can store all the giga/terabytes of information potentially relevant for the computation. Not to mention the security problem.

    And Jim Gray's analysis lays out in more precise economic terms why it doesn't make sense. I like how he even calculated the relative merits of a Beowulf-like cluster of PCs versus P2P which I never really analyzed (I lumped them together as basically similar.)

    That said, has anybody even made a stab at designing or implementing a relational database with a P2P architecture? I know that there's Oracle Cluster Server, but I'm thinking of something more low-end and more distributed.

    --LP
    • Re:Anybody tried it? (Score:4, Interesting)

      by gillbates ( 106458 ) on Monday July 07, 2003 @05:19PM (#6386080) Homepage Journal
      That said, has anybody even made a stab at designing or implementing a relational database with a P2P architecture

      Actually, I'm working on something similar for a customer of mine. The real challenge lies in the solutions to the following problems:

      • What happens when a node 'disappears' from the network? The traditional approach is to use a redundant backup, but doing so increases bandwidth usage. By the time you get around to a triple-redundant system, you're effectively working with only a third of your network's bandwidth.
      • As the number of nodes grows, the likelihood that two or more machines will need data not cached locally increases. As the system grows, there comes a point at which the entire system effectively becomes constrained by the slowest responding machine in the entire network. (For example, if I have 2 machines, half of the database will reside on the other machine, meaning that the local cache contains half of the result set, and half travels over the network. However, if I have 10 machines, each machine has only 1/10 of the database, meaning that 90% of the result set must travel over the network. To make matters worse, if each machine is a client as well, then in a 10 machine cluster, the average client will spend 9/10 database io cycles fulfilling requests for other clients, and only 1/10 io cycles performing its own queries.)
      At this point, I'm literally betting my career on solving the above problems. Network bandwidth is the real constraint; I'm currently working on ways to reduce the amount I have to send over the network. (For example, I'm considering an adaptive-locking strategy where a record would be marked for update on the remote server, but the transactions would be 'bundled' and sent across the network in aggregate to reduce network latency.)
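The scaling worry in the second bullet can be put in numbers. Assuming a uniform partition and uniformly random accesses (a simplification of the scenario described above):

```python
# With a database uniformly partitioned over n nodes, a random result set
# is only 1/n local, so the fraction that must cross the network,
# (n-1)/n, approaches 1 as the cluster grows.
def remote_fraction(n_nodes: int) -> float:
    return (n_nodes - 1) / n_nodes

for n in (2, 10, 100):
    print(n, remote_fraction(n))   # 0.5, then 0.9, then 0.99
```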
      • It's quite simple, really: consider the 2-db scenario (as more nodes are just more of the same).

        You hold the entire db on each server; all writes are performed to both servers, reads take place on either of them. That doesn't really reduce the network traffic, as you are still spending as much as you would with a 1-db system, but now you have redundancy and faster processing (i.e., reads).

        You hold the entire database spread over the 2 nodes, in which case you have problems if 1 node fails. In this case you really...
        • You hold the entire db on each server...

          Which is the strategy I'm employing right now. I would, however, like to be able to use this same db design for situations in which the entire db is too big to fit on a single server. I really don't see any point in writing another database if it doesn't overcome any significant limitations of the current offerings. To date, no one has come up with an effective distributed database, and hopefully I'll be able to change that.

          • good luck, but please bear in mind that the reason no-one has come up with an effective distributed DB is not through lack of customers, or trying, or research.

            You should remind yourself what problems you're trying to overcome - is it large database, faster processing, or faster networking? Each of those 3 requires a different solution.

            You could say the internet is a distributed database though, and Google is its index.
    • by JamieF ( 16832 )
      If you look at the way data works in a cluster, it's pretty clear that spreading it across a big slow network is a bad idea.

      In a DBMS, if all accesses are reads, you basically can just cache the data in every node of the cluster and it's ultra fast. If it's a lot of data, partition it across a large number of machines so that they each cache a subset of the whole database, and direct client hits to the appropriate node.

      The problem comes when you change something in an ACID-compliant DBMS - you have to write...
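The "partition and direct client hits to the appropriate node" idea above can be sketched as simple hash routing. This is illustrative only: the node names are made up, and real systems tend to use consistent hashing so a node joining or leaving does not remap every key:

```python
# Minimal sketch of read partitioning: hash each key to one of n cache
# nodes, so every node caches only 1/n of the database and any client
# can route a request without a central directory.
import hashlib

NODES = ["node-a", "node-b", "node-c"]    # hypothetical cluster members

def node_for(key: str) -> str:
    digest = hashlib.sha256(key.encode()).digest()
    return NODES[int.from_bytes(digest, "big") % len(NODES)]

# Every client computes the same routing, so hits for a given key
# always land on the node that caches it.
assert node_for("customer:42") == node_for("customer:42")
```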
    • Take a look at Mike Stonebraker's work [berkeley.edu].
    • Re:Anybody tried it? (Score:1, Informative)

      by Anonymous Coward
      No.

      But in principle, the reason you want to go for a P2P-based DBMS is not really scalability, which we can do today with 'shared nothing' clusters (lots of motherboards + local disk connected by Ethernet, rather than 'shared disk' clusters, which are lots of diskless motherboards connected to a SAN/NAS over the network). Rather, it's system availability.

      With shared disk clusters you have a central point of failure/synchronization: the disk (or the disk controller). Today's shared nothing DBMSs all adopt...
  • Attack on IBM? (Score:3, Interesting)

    by Jacco de Leeuw ( 4646 ) on Monday July 07, 2003 @04:05PM (#6385275) Homepage
    Some companies, notably IBM, Salesforce.com, Oracle.com and others are touting outsourcing, or "On Demand Computing," as an innovative way to reduce costs. There are some successes, but many more failures.

    The recurrent theme of this analysis is that "On Demand" computing is only economical for very CPU-intensive (100,000 instructions per byte or a CPU-day per gigabyte of network traffic) applications.

    This must be considered an attack on IBM's fairly visible On-Demand Computing campaign.

    Beowulf clusters have completely different networking economics. [...] That is why rendering farms and BLAST search engines are routinely built using Beowulf clusters.

    This reminds me of those Microsoft-funded TCO reports. They concede that Linux has cost advantages in a very specific field (webhosting; Beowulf clusters), because anyone intuitively knows it's true. For all the rest: use Microsoft stuff. That's what they are saying.
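As an aside, the two forms of Gray's break-even threshold quoted above ("100,000 instructions per byte or a CPU-day per gigabyte") do agree, given a roughly 2003-vintage CPU. The instructions-per-second figure below is my assumption:

```python
# Check that the two statements of Gray's break-even point agree:
# 100,000 instructions per byte vs. "a CPU-day per gigabyte".
ins_per_byte = 100_000
bytes_per_gb = 1e9
assumed_ips = 1e9                  # assumption: ~1 GIPS, 2003-era CPU

cpu_seconds = ins_per_byte * bytes_per_gb / assumed_ips
print(cpu_seconds / 86_400)        # ~1.16, i.e. about one CPU-day per GB
```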

    • Re:Attack on IBM? (Score:2, Insightful)

      by pcause ( 209643 )
      No, it isn't an attack on IBM or anyone else. Gray knows what he is talking about and his analysis is just fine. We all need to get past the marketing hype, commercials, and excitement about "the next big thing" and look at the reality of the numbers. The issue is: are we close to having the infrastructure for generalized "on demand computing"? Gray explains it so that anyone can understand the tradeoffs. Even your CFO, who is the key!

      It is a great article/analysis. Believe it and ignore the hype. SOm
    • I believe he misunderstands what "on-demand" is all about. He is interpreting it as merely an "enterprise" version of SETI. "On-demand" involves a number of things, including instant provisioning of massive amounts of storage out of a central pool (in the same or multiple locations, depending on requirements), instant steps in CPU and I/O bandwidth, additional server nodes, etc. The SETI style of distributed computing indeed has very limited applications. Being able to quickly increase the size of your
  • Another data point (Score:5, Insightful)

    by Alomex ( 148003 ) on Monday July 07, 2003 @04:15PM (#6385366) Homepage
    A few years back when Grid computing was all the rage we sat down with some investment partners and worked out the figures. We came pretty much to the same conclusion. The "average" commercial supercomputing application (pharma, oil drilling, simulation) would not benefit from "free" cycles on the network.

    Essentially, any commercial computation valuable enough to require that amount of effort can justify purchasing a hundred-thousand-node Beowulf cluster and running it locally. The reduction in network costs and the advantages of total control and tight security more than pay for the difference in computing cost.

    Non-commercial computations such as SETI will benefit from grid computing, and we expect to see more efforts along those lines (RSA, Mersenne, Stanford DNA). But remember, we were thinking about starting a business, and none of those pay for the services, so we moved on.

  • When SETI@Home spent $10^6 to get everyone to spend $10^8 on electricity alone, how was that a good deal? Have extraterrestrials sent a message that they're about to touch down with a vaccine for AIDS, a formula for cold fusion, a permanent end to unemployment, a sure-fire way to get good representation in government? Could we have spent the money more wisely, Jim?

    If Bill paid you folks to do something more than get technically-challenged investors excited, perhaps our software would work better. (And ASN.
  • His reasoning sounds good, but what the hey? It sounds pretty obvious that the most cost effective approach is to keep the data close to the CPU doing the crunching.
  • Computing costs hundreds of billions of dollars per year.
    IBM, HP, Dell, Unisys, NEC, and Sun each sell billions of dollars of computers each year. Software companies like Microsoft, IBM, Oracle, and Computer Associates sell billions of dollars of software per year. Computing is obviously not free.

    So what happens to free software? Looks like either the money stays in the corporations, with them creating software, paying developers, and billing clients; or the money is in the populace's bank account so the free

  • by t482 ( 193197 ) on Monday July 07, 2003 @04:33PM (#6385579) Homepage
    Total Cost of Ownership (TCO) is more than a trillion dollars per year.
    Operations costs far exceed capital costs. Hardware and software are minor parts of the total cost of ownership.
    -- Microsoft software is cheap, so you should keep buying it. Even if administering it is expensive.

    Megaservices like Yahoo!, Google, et al. have relatively low operations staff costs.
    -- Open source, if managed properly, doesn't need many people. But this formula can't be applied to the proprietary software shit you buy.

    Most applications do not benefit from megaservice economies of scale
    --Most Microsoft products. We will still take our chunk of flesh no matter what.

    Outsourcing has often proved to be a shell game - moving costs from one place to another.
    --having a third party vendor manage your Microsoft software for you isn't going to save you much.

    Web services reduce the costs of publishing and receiving information.
    --But you will need a huge support staff to manage it plus lots of licenses. (see above)

    Most Web and data processing applications are network or state intensive and are not economically viable as mobile applications.
    --especially once the MS licensing is thrown in.

  • by gatkinso ( 15975 )
    ...because people like me are willing to donate their computer's time and a part of their [and their employers', hawhaw] electric bill.

    I do so because I am interested in the project... not because I feel like I want to help cut someone's computing cost. If SAH was a for profit enterprise my interest would quickly evaporate.

  • If it can't be done on an abacus by an infinite number of monkeys, then it can't be done. It is the simple monkey principle, similar to the duct tape principle.
  • by warriorpostman ( 648010 ) on Monday July 07, 2003 @05:18PM (#6386063) Homepage
    ...but, there's other programs that people might find more socially useful/productive than SETI.

    How 'bout...this from United Devices [grid.org]? They do a variety of biologically related projects, the most popular one, as far as I can tell, being cancer research...I've been running it for almost 2 years, and I have 100,000 points...how many points do you have?
  • Google (Score:4, Interesting)

    by The Creator ( 4611 ) on Monday July 07, 2003 @05:22PM (#6386108) Homepage Journal
    For example, in 2002 Google had an operations staff of 25 who managed its two petabyte database and 10,000 servers spread across several sites.


    And what OS were they using? :)

  • Massive grid computing currently isn't economical for crunching the daily workload of insurance claims... not that it ever occurred to any of us that it would be, but it's nice to hear it from the world's leading expert on TP.
  • This is a good analysis of the hardware side of the cost/benefit analysis of distributed computing, but that's nowhere near the full story.

    For example, per the thesis of this article (that network communication is the largest expense of distributed computing), the Salesforce.com model isn't valid. Yet they're a great success story of computational outsourcing. Huh?

    The key, I think, is that outsourcing eliminates distractions, and gets your employees back to working at your company's core competency.

  • It would be better if the post also mentioned that Jim Gray is a Turing Award winner.
  • One cost of distributed computing is picking the wrong donation...

    E.g. If you own a computer, you are most likely a person. People get old and die. You should do medical distributed computing projects since you might benefit from the findings. Looking for aliens is cool and all, but not much in the way of practical application (unless they are beaming encoded drawings of really cool devices).

    On the third hand, if you are a stranded "traveler" (not the Irish kind)... Maybe putting out a "beam me up" signa

  • a Beowulf-class MPI job that simulates crack propagation in a mechanical object
    Arrest them at once!

    Tierce
  • This pigeonholing of distributed computing (i.e. Grid) technologies is flawed thinking - and it would be a worry if IBM and Microsoft have built their "the future is web-services" strategies solely on this type of analysis. What the analysis forgets is that you have to get the data INTO the database before you can compute with it. For many processing tasks, it makes more sense to send the bulk of the processing to the data rather than the other way round. The barriers to achieving this aren't in the economie
