
Mega-Uploads: The Cloud's Unspoken Hurdle

First time accepted submitter n7ytd writes "The Register has a piece today about overcoming one of the biggest challenges to migrating to cloud-based storage: how to get all that data onto the service provider's disks. With all of the enterprisey interweb solutions available, the oldest answer is still the right one: ship them your disks. Remember: 'Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.'"

  • Backups (Score:5, Informative)

    by SJHillman ( 1966756 ) on Monday May 21, 2012 @04:37PM (#40069227)

    My last employer offered offsite backups to clients. For the initial seed, we always tried to get them to put it on an external HDD and ship it to us (or at least DVDs). The only major exceptions were clients that were also on FiOS - that was the only case where over-the-net transfer was faster than the backup-and-ship-it method for the initial seed.
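
    A minimal back-of-the-envelope sketch of that trade-off, in Python. The seed size (~250 GB), the upstream rates, the 80% efficiency factor, and the 24-hour courier time below are illustrative assumptions, not figures from the comment:

      # Initial-seed comparison: push the data up the wire vs. ship an external HDD.
      # All numbers are assumed for illustration only.

      def upload_hours(size_tb, upstream_mbps, efficiency=0.8):
          """Hours to upload size_tb terabytes at upstream_mbps, derated for protocol overhead."""
          bits = size_tb * 8e12                     # TB -> bits (decimal units)
          return bits / (upstream_mbps * 1e6 * efficiency) / 3600

      seed_tb = 0.25                                # assumed initial backup seed (~250 GB)
      for label, mbps in [("ADSL, 1 Mbps up", 1),
                          ("cable, 10 Mbps up", 10),
                          ("FiOS, 35 Mbps up", 35)]:
          print(f"{label:>18}: {upload_hours(seed_tb, mbps):7.1f} h")
      print(f"{'overnight courier':>18}: {24.0:7.1f} h  (plus copy time at each end)")

    On those assumed numbers, only the FiOS-class upstream beats the courier, which lines up with the poster's experience.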

  • by Anonymous Coward on Monday May 21, 2012 @05:07PM (#40069579)

    Not really.
    As a consumer I get 1Gbps/100Mbps for roughly 130 USD.
    If I settle for 100/100, we're talking 50 USD.
    250/100 is around 70 USD.

    Per month.

    Of course, this is consumer gear, so I'm only *guaranteed* 60% of the upstream.

    Then again, if you're doing pro photography at 80MP, odds are you're doing it as a business, and should have little to no problem forking out the ~$800 a month a gigabit pipe would run you.

    Oh wait, you live in the land of the brave and free, home of the 512Kbps broadband?
    Sucks to be you.
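
    For scale, a quick Python sketch of what those upstream rates mean for the photography case. The per-frame size (~100 MB per 80 MP raw) and the 500-frame shooting day are assumptions for illustration; the 60% guaranteed upstream comes from the comment:

      # Upload time for a day's shoot at two of the upstream rates mentioned above.
      # Frame size and count are assumed; the 60% guarantee is taken from the comment.

      frames_per_day = 500                              # assumed
      gb_per_frame   = 0.1                              # ~100 MB per 80 MP raw frame (assumed)
      shoot_gb       = frames_per_day * gb_per_frame    # ~50 GB

      def hours(gb, upstream_mbps, usable_fraction):
          bits = gb * 8e9
          return bits / (upstream_mbps * 1e6 * usable_fraction) / 3600

      print(f"100 Mbps up at 60% guaranteed: {hours(shoot_gb, 100, 0.6):6.1f} h")
      print(f"512 kbps 'broadband':          {hours(shoot_gb, 0.512, 1.0):6.1f} h")

    Roughly two hours versus about nine days for the same shoot, under those assumptions.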

  • by BradleyUffner ( 103496 ) on Monday May 21, 2012 @05:10PM (#40069605) Homepage

    I have never liked the station wagon analogy, because it misunderstands the thing we are trying to measure. In the example, we measure the bandwidth of the station wagon. But that's like measuring the bandwidth of a packet -- a nonsense concept. We measure the bandwidth of the channel, not the chunks of data which fly through it. To really get the right analogy, we should talk about the bandwidth of a freeway, not the station wagon which drives upon the freeway.

    Bandwidth in the colloquial sense means "the amount of data which passes a given point, per second." So, imagine that you can load 25 TB in the form of tapes into a station wagon. For safety, these station wagons must drive 75 meters apart at a speed of 100 kilometers per hour. That means that one station wagon passes a given point every 2.7 seconds. That's 9.2 TB per second. Adding a second lane to the highway would double the bandwidth.

    The stupid calculation which is often performed, on the other hand, goes like this: you have 25 TB in the wagon, and you drive it to a location 10 hours away... Already you've gone off the tracks, because you are mentioning the TIME it takes to get to the destination, i.e. the LATENCY. And as anybody knows, the latency (or equivalently the distance between the points) has NOTHING to do with bandwidth.

    How can you say Time has nothing to do with bandwidth when, in your own example, you measured it in TB per SECOND?

    Following your example of 9.2 TB/sec again: that's 9.2 TB * 60 per minute, 9.2 TB * 60 * 60 per hour, or 9.2 TB * 60 * 60 * 10 per 10 hours, which is exactly the measurement you seem to have a problem with earlier in your post (data moved in a 10-hour period).
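
    Both framings use the same numbers; here is a short Python sketch, using only the figures quoted in this comment, that makes the ratio explicit:

      # The "freeway" framing from the comment: wagons passing a fixed point.

      payload_tb = 25.0                        # tapes per station wagon
      spacing_m  = 75.0                        # following distance for safety
      speed_kmh  = 100.0

      speed_ms   = speed_kmh * 1000 / 3600     # ~27.8 m/s
      headway_s  = spacing_m / speed_ms        # ~2.7 s between wagons
      tb_per_s   = payload_tb / headway_s      # ~9.26 TB/s per lane

      print(f"one wagon every {headway_s:.1f} s -> {tb_per_s:.2f} TB/s per lane")
      print(f"per hour:     {tb_per_s * 3600:,.0f} TB")
      print(f"per 10 hours: {tb_per_s * 36000:,.0f} TB")

    Scaling the same rate by a minute, an hour, or a 10-hour window just multiplies both sides of one ratio, which is the point of the reply.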

  • The tiny town of Sebastopol, CA (population ~7,800) has gigabit fiber to (some) doorsteps for $69/month.

  • Re:The real hurdle (Score:5, Informative)

    by CAIMLAS ( 41445 ) on Monday May 21, 2012 @07:35PM (#40071149)

    That is just one of many of the hurdles.

    Really, these problems are problems because most 'cloud' shit is done wrong.

    It's a bit of a worn out record here on Slashdot, but anyone or any company which is fully dependent upon The Cloud for business continuity is a fool.

    * First off, there is no such thing as 'utility computing', and probably never will be due to the volatile nature of storage and its ongoing cost of maintenance.
    * Second, if you do not maintain primary physical control of something, to the best of your ability, you do not control it.
    * For primary IT infrastructure, it will cost more to do "Cloud" than to run it locally. If you can afford 2-3 servers a year, but not much more, plus a nominal IT operations budget, chances are you should have an in-house "cloud" with off-site replication (a rough cost sketch follows at the end of this comment).
    * Bandwidth costs in both directions will kill you, and in many cases latency will kill Cloud functionality as well.

    At this point, I still strongly recommend against public Clouding your systems unless they are:

    a) (very!) low volume with use-based billing. On a cost basis, this only makes sense for a low-volume public-facing site where you don't already have IT infrastructure.
    b) off-site 'hot' replication. You've got your inside 'private Cloud', which replicates to off-site systems. (Cloud is basically just colocated virtualization, after all.)
    c) other geographic/distribution requirements (e.g. a multisite organization with no site serving as a good central hub). In this case, colocation of your own equipment makes more sense in many regards.
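
    The rough three-year cost sketch referenced in the bullet list above, in Python. Every dollar figure here is a placeholder assumption for illustration and not a quote of any vendor's actual pricing:

      # Structure of the comparison: up-front server capex plus ops, vs. recurring
      # instance and bandwidth charges. All dollar amounts are assumed placeholders.

      years, servers        = 3, 3
      server_capex          = 5_000    # assumed cost per in-house server
      ops_per_year          = 10_000   # assumed nominal IT operations budget
      offsite_repl_per_year = 3_000    # assumed off-site replication / colo cost

      instances             = 3
      instance_per_month    = 600      # assumed comparable cloud instance
      egress_per_month      = 300      # assumed bandwidth/egress charges

      in_house = servers * server_capex + years * (ops_per_year + offsite_repl_per_year)
      cloud    = years * 12 * (instances * instance_per_month + egress_per_month)

      print(f"in-house + off-site replication, {years} yr: ${in_house:,}")
      print(f"public cloud equivalent,         {years} yr: ${cloud:,}")

    Whether the gap is real depends entirely on the assumed figures; the structural point is that recurring instance and egress charges compound every month, while server capex does not.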
