IBM Launches New Product Line

An anonymous reader notes that "IBM has launched its new product line of storage devices: the DS6000 and the DS8000. The results are quite impressive: the DS6000 is a rack-mountable, 3U, ONLY 125 pound storage device that will hold up to 67.2 TB! The DS8000 is equally impressive, with 6x the performance of the ESS 800 (Shark), making it the most powerful storage system to date."
  • by bushboy ( 112290 ) <lttc@lefthandedmonkeys.org> on Wednesday October 13, 2004 @01:44AM (#10511365) Homepage
    Download the whole internet!
  • DS? (Score:5, Funny)

    by Anonymous Coward on Wednesday October 13, 2004 @01:44AM (#10511367)
    Does that stand for *cough* DeathStar, er *cough* I mean DeskStar hard drives?
  • Huh? (Score:5, Funny)

    by bigberk ( 547360 ) <bigberk@users.pc9.org> on Wednesday October 13, 2004 @01:47AM (#10511375)
    IBM has launched its new product line of storage devices
    What's that?? I can't hear you over my screeching Deskstar 75 GXP [goldengate.net]!!
  • To inform (Score:4, Informative)

    by a.different.perspect ( 817184 ) on Wednesday October 13, 2004 @01:47AM (#10511377) Journal
    More [google.com] articles [byteandswitch.com], for the more [arnnet.com.au] article [infoworld.com] inclined [internetnews.com].
  • by Anonymous Coward on Wednesday October 13, 2004 @01:52AM (#10511397)
    It's 67.2 TB if you have 14 enclosures (224 disks)... a single enclosure only allows 300GB x 16 drives = 4.8 TB... still quite a lot, though.
  • Writeup is wrong (Score:5, Informative)

    by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Wednesday October 13, 2004 @01:54AM (#10511407)
    The DS6000 supports up to 67.2TB, but not in one enclosure. The DS6000 only fits 16 disks per enclosure, and with 400GB disks that is 6.4TB. 400GB disks seem to only be available as SATA and PATA; the largest SCSI disks I could find are 300GB. That means 4.8TB per enclosure. 16 DS6000s per 48U rack, that's 76.8TB. Remove every 8th disk for RAID-5, and that's 67.2TB.
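    (A quick sanity check of that arithmetic, as a Python sketch; the enclosure, rack, and parity figures are the ones from this comment, not from IBM's spec sheets.)

      disks_per_enclosure = 16
      disk_tb = 0.3                  # 300GB SCSI drives
      enclosures_per_rack = 48 // 3  # 16 x 3U enclosures per 48U rack

      raw_tb = disks_per_enclosure * disk_tb * enclosures_per_rack
      usable_tb = raw_tb * 7 / 8     # lose every 8th disk to RAID-5 parity
      print(round(raw_tb, 1), round(usable_tb, 1))  # 76.8 67.2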
    • Re:Writeup is wrong (Score:5, Interesting)

      by OrangeTide ( 124937 ) on Wednesday October 13, 2004 @02:06AM (#10511463) Homepage Journal
      8 disk RAID-5? You have a lot more guts than I do!

      Maybe raid5+1 or maybe four 4-disk raid5s stuck together in an append or raid0. Or maybe raid6, if anyone ever releases a product that makes it easier to manage.
      • RAID 5+1? You have a lot more time and money than I do!

        RAID 10 I can see if you really need high redundancy/availability, but 5+1 is just way too slow and too disk-hungry for any practical use. (what company or person wants to buy 16 disks for every 7 disks worth of storage they get to actually use?) For most uses, RAID 5 or RAID 3 do the job very nicely, providing decent redundancy without trading off too much space and performance. And yes, I mean 8-disk RAID 5.

        RAID 6 is like RAID 5+1, but not as bad -
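        (A back-of-envelope usable-capacity comparison in Python for the layouts being debated here; a minimal sketch that assumes a hypothetical 16-bay shelf of 300GB drives and reads "RAID 5+1" as a mirrored pair of 8-disk RAID-5 sets.)

          n, tb = 16, 0.3                  # 16 bays of 300GB drives
          raid10  = n // 2 * tb            # mirror everything: 8 disks usable
          raid5x2 = 2 * (8 - 1) * tb       # two 8-disk RAID-5 sets
          raid51  = (8 - 1) * tb           # "RAID 5+1": mirror two 8-disk RAID-5s
          raid6   = (n - 2) * tb           # one 16-disk RAID-6, two parity disks
          print(round(raid10, 1), round(raid5x2, 1), round(raid51, 1), round(raid6, 1))
          # 2.4 4.2 2.1 4.2 TB usable -- hence "16 disks for every 7 disks worth"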
        • RAID 10 I can see if you really need high redundancy/availability, but 5+1 is just way too slow and too disk-hungry for any practical use. (what company or person wants to buy 16 disks for every 7 disks worth of storage they get to actually use?)

          Lots of different terminology here... when talking about 5+1 in a RAID 5 setting, it's often a short way of saying "5 data disks + 1 parity disk" for each set. 5+1 is a common configuration for write-intensive tasks, since parity will be a limiting factor. For

          • I have never seen the term RAID 5+1 used the way you describe, and in the context of the post I was responding to, it doesn't seem to make sense. I'll quote:

            "8 disk RAID-5? You have a lot more guts than I do!

            Maybe raid5+1 or maybe four 4-disk raid5s stuck together in an append or raid0.
            "

            Note that the poster uses the term "8 disk RAID-5" to refer to a RAID 5 setup where one out of every 8 disks' worth of space is dedicated to parity; then he uses "4-disk raid5s" to refer to 4-disk arrays with one drive's w
          • OK just re-read your comment. I knew it had to make more sense than I originally thought. And it did.

            "when talking about 5+1 in a RAID5 setting, it's often a short way of saying "5 data disks +1 parity disk" for each set."

            I have seen this usage, and it does make sense, but not in the context - and it would be very sloppy to say it as RAID5+1, which is what the poster I replied to said, and which is how I misread your post earlier.

            Sorry about the misunderstanding.
        • RAID 6 is like RAID 5+1, but not as bad - poor performance (compared to RAID 5 - twice the parity to calculate) [...]

          It's not the parity calculations that are the bottleneck for RAID5, it's all the additional I/O required.

        • Most often, RAID 5+1 means a RAID 5 array + 1 hot spare (to minimize critical array time in case of a disk failure)
      • Re:Writeup is wrong (Score:5, Informative)

        by keesh ( 202812 ) * on Wednesday October 13, 2004 @04:19AM (#10511846) Homepage
        IBM's standard is 6+P+S (six normal, one parity, one spare). Since the monitoring setup is damned good, and the CEs are really fast in replacing drives, it seems to work. The only reason raid 6 exists at all is because EMC accidentally shipped a bunch of duff drives once.
      • 8 disk RAID-5? You have a lot more guts than I do!

        Maybe raid5+1 or maybe four 4-disk raid5s stuck together in an append or raid0. Or maybe raid6, if anyone ever releases a product that makes it easier to manage.


        We have (counting on fingers) 8 storage array cabinets very similar to IBM's - they're Infortrend devices. All of these units are either 12 disk or 16 disk devices. And we break them in half. So, we have one global spare for each cabinet, and an array of:

        one 6-drive RAID 5 and one 5-drive RAID 5 (plus the 1 global spare)
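        (The same split, worked out in Python; the bay counts are the poster's, and the 16-bay units are assumed to follow the same pattern.)

          bays = 12                            # per cabinet
          set_a, set_b, spare = 6, 5, 1        # two RAID-5 halves plus one global spare
          usable = (set_a - 1) + (set_b - 1)   # each RAID-5 set gives up one disk to parity
          print(usable, "of", bays)            # 9 of 12 bays hold data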
  • by wombatmobile ( 623057 ) on Wednesday October 13, 2004 @01:54AM (#10511408)

    "These are the most significant storage announcements we have made in more than a decade. IBM is focused on being the storage innovator and clear technology leader," said Dan Colby, General Manager, IBM Storage Systems. "Today, we are delivering new economics and choice by leveraging common components, breakthrough technologies from mainframes and supercomputers, and unmatched virtualization and management capabilities."

    Most significant in a decade? New economics? Wow, this is too important for Slashdot. Somebody should call Time magazine. Or Newsweek.

  • by mrjb ( 547783 ) on Wednesday October 13, 2004 @01:54AM (#10511410)
    At that price I'll have one.
  • not 67TB in 3U (Score:3, Informative)

    by swmike ( 139450 ) on Wednesday October 13, 2004 @01:55AM (#10511413)
    It'll grow by the modular 3U unit.

    The single 3U unit won't hold 67.2TB; that's a bunch of them linked together.
  • by jaephu ( 821711 ) on Wednesday October 13, 2004 @01:59AM (#10511429)
    uh oh... Microsoft Windows Longhorn Minimum System Requirements:
    ...
    Hard Drive: 30TB
    Memory: 2 GB
  • by just someone ( 13587 ) on Wednesday October 13, 2004 @02:00AM (#10511437)
    Product pricing and availability
    IBM's new storage offerings with enterprise class functions reset the bar with minimum configurations starting at half a terabyte and list prices starting as low as $97,000. The DS6000 series and the DS8000 series come standard with a four-year warranty on hardware and software, which is unique in the industry.


    What are they smoking? $9.7k a terabyte, maybe. $97k? Even EMC is not that high any more.
    • by Anonymous Coward
      Yes, these prices are ridiculous.
      You can get 5.6 TB for $10k in true 3U using VTrak 15100 from Promise.
      That's $4k for VTrak plus 15 x $400 for 400GB drives.
    • I think the target here is ultra-dense high capacity storage users. It has to be, though I can't imagine there are that many customers in this space.

      Certainly for lower capacities (e.g. 10 TB), there are much, much cheaper turnkey solutions in the form of the Xserve RAID from Apple, not to mention the stuff Promise sells.

      • There are plenty of customers in this space: government, financial, and ISPs, to name a few. Look at products like nixon (whatever NFR's flight recorder is called today) and other products that store every single piece of data that goes in and out of a network. I work for a civilian gov agency; we generate 2TB of data a day and store it for 10 years, so something like this becomes useful, although I suspect in our case a larger SAN would be much more efficient. Banks, same way, they need to store huge amount
    • What you're paying for in the $97k base configuration is the chassis with a few disks. That chassis may seem expensive, but considering the kind of redundancy built in (dunno if $97k is the fully redundant system with dual caches, dual FC switches, dual PowerPC processors, dual power supplies, you get the picture), it's a pretty low price.

      Remember, when looking at stuff like this, you're really looking at the price of the chassis as the base of a much larger system, not a "I need half a terabyte, what sho

    • Folks, the 97k is a base unit, with dual, bulletproof controllers, and the minimum amount of storage. Doubling the storage does not double the price.

      SirWired
  • by OrangeTide ( 124937 ) on Wednesday October 13, 2004 @02:03AM (#10511446) Homepage Journal
    I dunno. 67TB in fourteen 3U 16-drive units doesn't sound all that impressive. Maybe if you could fit 100TB in 50U of space I would be impressed; even if this could scale that high, you could only fit 80TB in that amount of space. 3U for 4.8TB of raw storage is not a big deal. You can build your own low-quality system with that kind of capacity out of cheap disks. Obviously not with the same performance, though.

    I will admit that this is a very fast product with decent redundancy. But I generally believe dealing with redundancy at a higher level, in software, is much more flexible than controller-level redundancy. And cheaper.

    Fibrechannel drives sound neat and all, but if someone can fit 3x as many "lower end" drives in the same amount of space that's lower cost, higher redundancy, higher capacity and higher performance. I'm sure they are good for something though, else IBM wouldn't have such a sales drive behind them. *snicker*
    • It's really about the performance. You're correct in that anyone can build a system with the equivalent storage space cheaper.
      I have a few TB of storage on my own network, and it's great for archiving stuff, but it would be crap for storage on a high-load server. That's the situation these will be useful in: a good amount of storage with good performance, from a well-known vendor. Especially for cases where a business already owns an IBM server, and wants to ensure compatibility a
  • by Temporal ( 96070 ) on Wednesday October 13, 2004 @02:03AM (#10511447) Journal
    If only there were some sort of visual stimuli -- say, something which appeals to our most basic primal instincts -- which could be stored on such a device, and subsequently accessed whenever one is bored and no one is watching. Alas, I am unable to imagine anything suitable. Perhaps one of my fellow Slashdotters has an idea?
    • I can see it now: 4 Ask Slashdots in the next week: "I have 67.2TB of online storage, what do you guys use to back up all that data?"
    • by zoeblade ( 600058 ) on Wednesday October 13, 2004 @03:29AM (#10511726) Homepage

      If only there were some sort of visual stimuli -- say, something which appeals to our most basic primal instincts -- which could be stored on such a device, and subsequently accessed whenever one is bored and no one is watching. Alas, I am unable to imagine anything suitable. Perhaps one of my fellow Slashdotters has an idea?

      Pictures of yummy doughnuts?

    • Please add a constraint such that said data must be fully accessible using only one hand for input devices while the other hand is otherwise occupied.
  • 67.2TB (Score:3, Funny)

    by TheRealStaunch ( 781450 ) <abryzak@gmail.com> on Wednesday October 13, 2004 @02:03AM (#10511449)
    67.2TB should be enough for anyone!
  • ... are designed to deliver a generation-skipping leap ...

    They even know how to use proper wording. No "quantum" here (presumably because IBM has some background on the real thing).

    CC.
  • Expensive logo? (Score:5, Interesting)

    by Doc Ruby ( 173196 ) on Wednesday October 13, 2004 @02:05AM (#10511455) Homepage Journal
    Actually, the DS8000 is marketed as expandable up to 192TB. Since it's marketed as starting at 580GB, and priced starting at $97K, that's about $167/GB. Considering that a single 160GB drive, without redundancy, integrated POWER uC and other server hardware, IBM support or management software, costs about $0.50/GB, and probably less in quantities of 1200 (==192TB), are those extras worth it compared to rolling your own RAID?
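    (The per-gigabyte figures, redone in Python; the $0.50/GB commodity price is the poster's estimate, not a quoted price.)

      print(round(97_000 / 580))   # ~167: $/GB at the 580GB, $97K entry config
      print(round(0.50 * 160))     # ~80: dollars for a bare 160GB drive at $0.50/GB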
    • Re:Expensive logo? (Score:5, Interesting)

      by Doc Ruby ( 173196 ) on Wednesday October 13, 2004 @02:15AM (#10511503) Homepage Journal
      OTOH, the DS6000 takes 300GB SCSI drives [slashdot.org]. 192TB is 640 300GB drives, which cost at least $197 each in EIDE, for a total of $126K. SCSI drives, meanwhile, cost well over $500 each at 300GB, though about $1/GB at 147GB. If a loaded DS6000 costs anywhere near $325K, IBM really has great prices at the high end. That's about 768K FLAC'ed CDs, which would cost $15.4M to buy at $20 each.
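      (Redoing that drive math in Python, using only the prices quoted in this comment.)

        drives = 192_000 // 300      # 192TB at 300GB per drive -> 640 drives
        print(drives * 197)          # 126080: ~$126K total in EIDE at $197/drive
        print(drives * 500)          # 320000: ~$320K at "well over $500" per SCSI drive
        print(192e6 / 768_000)       # 250.0: implied MB per FLAC'ed CD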
    • Re:Expensive logo? (Score:2, Insightful)

      by roror ( 767312 )
      yes, especially because "no one gets fired for choosing IBM," while if you build your own RAID, you might be.
    • Re:Expensive logo? (Score:5, Insightful)

      by guacamole ( 24270 ) on Wednesday October 13, 2004 @02:45AM (#10511604)
      When I hear someone suggest to roll your own anything, I want to scream and run, as they probably haven't worked a day in a real production environment. I'd like to see you roll your own, manage, and support a multi-terabyte storage system and then decide for yourself whether it was worth it (assuming you're lucky and get the chance, after not being fired because something went wrong and ate your data or caused downtime).

      As for this particular case, this system was obviously designed to efficiently manage vast amounts of storage. It is not worth buying if you only need 580GB of storage. Besides, no one pays list price in the enterprise storage market, and no one buys IBM's enterprise hardware just because they think they need the hardware alone.

      • Google [rochester.edu].
        • Google have a hand-rolled filesystem and hardware, supporting several petabytes of data. (Exactly how many is anyone's guess, but reckon on tens of thousands of servers × hundreds of gigabytes each.)

          Apparently they're pretty happy with it.
          • f#ck google (Score:3, Insightful)

            by Raindeer ( 104129 )
            No matter how much google stores, it is not the one to look at when you're talking corporate data storage. Corporate datastorage is about storing all the data of all your oil fields, in a way you're sure you don't loose it. It is about storing every single product that you make in a database, complete with tracking of location and which customer bought it. It is about all those things Google doesn't do and doesn't care about. I am willing to bet that for its financial system Google is using similar to the o
            • make love to google (Score:3, Informative)

              by boots@work ( 17305 )
              If you actually read the link, you'd see there is at least as much redundancy designed into the system as in most NAS systems, and it has been very reliable to date. You are familiar with the idea of RAID, aren't you -- Redundant Arrays of Inexpensive Disks? This is the same approach as in the IBM hardware, but at a much higher level.

              For example they maintain integrity checks of every block, to catch silent corruption. This is not done by many competing systems -- it is a major selling point of Sun ZF
              • If I'm not mistaken there is a filter on certain words here... So I bleep stuff out. :-)

                Loose data is what you get when English is one's third language, typing is quick and thinking is slow. It is called a spelling error.
                • you are mistaken... fuck fuck fuck fuck fuck.

                  It's because the OP who F#CK'd google wasn't ready to go the final mile and write FUCK google, for whatever reasons he had in mind, which is what the GP was wondering about.

            • You forgot DNA sequencing. The big data fast data storage market right now is in genetic work, which trivially blows away any typical corporate industrial database application.
              • Ha, I think I can top that when it comes to research. Look up a project called LOFAR. It is a distributed sensor array for astronomy, basically creating a huge radio observatory a couple of hundred kilometers across in the Netherlands. The amounts of data those guys talk about are measured in tens of gigabits per second, continuously for years and years and years. Big Blue is building the supercomputer for that.

                But you're right: DNA sequencing and biotech, but also medical imaging, demand huge, huge amounts of storage.
              • And we have no problem spending tens and hundreds of thousands on storage arrays that we know Will. Not. Fail.
                COTS doesn't cut it when you have data worth billions, guys.
            • "Corporate datastorage is about storing all the data of all your oil fields, in a way you're sure you don't loose it. It is about storing every single product that you make in a database, complete with tracking of location and which customer bought it."

              Actually, we use Word documents for that. Lots and lots of word documents. On Windows servers with blank administrator passwords.

              Don't you just love the corporate IT myth of everything being neat and organised, and running suitable good-quality software?
        • Re:One word (Score:3, Informative)

          by Tony-A ( 29931 )
          Google is a completely different animal.

          Google itself is ultra reliable so long as most everything is working kinda sorta well. Something breaks and Google just researches the web, which it was going to do anyway. Google can function perfectly well with lots of its components broken. Almost nobody else can.
      • When you've got a $100K+ budget for a rackmounted box for storage, in addition to the salaries of its administrators and people to fill it, you can "roll your own" pretty well. Especially when you're in whatever cutting-edge biz generates 192TB. You're probably building lots of your own custom apps, rolling your own TCP/IP (or other) network. In a few months we might hear about IBM's customer list for this new product. Those customers' competitors are probably rolling their own. Several months after that we
      • Everyone wants to sell me a solution. Solutions cost more, produce more profit, and tie me to the vendor. In most cases, I don't need or want their solution; I just want and need their product.

        But there are times I want solutions, and solutions cost more. They come with uptime, top-notch support, etc. When there's a problem, they often know it before we do, and notify us how and when it will be solved.

        For our compute farm and desktops we buy products. For our networked mass storage, we buy solutions.
    • Yes. Yes, it is.

      Now go back to your parents' basement, and pretend you have a clue about enterprise environments. In the meantime, read these hints.

      SO YOU WANT TO MAKE YOUR OWN ENTERPRISE RAID SYSTEM
      (from 'clues for the terminally clueless')

      First of all, get your hardware right. SATA doesn't cut it in the real world. Modern SCSI is acceptable for small to medium systems, but large-scale is FCAL all the way, and that costs money. Count on $1800/146GB drive, or $12.50/GB. Add cabinets ($10,000/14-disks or
      • When you talk about SATA, SCSI and FCAL, are you meaning the interface or the drives themselves?

        There are numerous cheaper products out there that stuff PATA or SATA disks into a rack chassis and provide a SCSI and/or Fiber interface to the collection as a single logical drive.

        On the other hand I have seen a fair amount of information on speed and reliability of SCSI drives versus IDE drives, and it all points to SCSI being a better choice.

        I'd imagine that a "cheaper" solution that uses PATA/IDE disks

        • The trade-off has been shifting. I haven't actually seen any FC-AL or SCSI hard drives over 80 gig in any machines yet, but SATA drives of 400 gig are commodity items you can buy at your local computer store. At that price, it's a lot cheaper to have a lot of redundancy in cheap hardware than to run the higher-speed SCSI/FC-AL. And if you've ever tried to wire SCSI inside a tightly laid out rackmount box, you'll see why both SATA and FC-AL have huge advantages in wiring and cooling. But FC-AL is grossly more
      • Your comments about enterprise gear are worth making. And it's a nice compliment to mistake the Manhattan skyscrapers of the giant, global banks and publishing companies where my network architectures run for "my parents' basement". Would you ask Mom to turn down those Wall Street squawkboxes, and maybe fry up another $200M fixed-income coupon issue? There's a game on the compliance workflow archive server, and Rover has gotten down into the raised floor again.
  • by Anonymous Coward
    Ok, call me crazy, but there's one thing I want to know: how many RAID sets, and how many partitions, can this thing handle? Right now, where I work, we have what IBM used to call a FAStT storage array (we're busy rolling it out at the moment). One thing which may have an impact in the future is the limitation on the number of partitions that the controllers can handle (aka LUNs); IIRC, it's 64 (might be 128, I can't remember for sure).

    I've also had experience with FC setups which have a limit on the number of R

    • Just a quick one - Partitions != LUNs.

      A LUN (logical unit number) is specific to the host and is effectively the physical disk number. The number of LUNs supported is very much dependent on the OS (Windows/Solaris support 256 and Linux supports 128, due to RDAC limitations, currently).

      Storage partitioning is configured at the storage device level and is a logical grouping of logical drives, host groups and hosts to control access and improve performance. The number of partitions supported depends on how m
    • by Pinback ( 80041 )
      The FAStT subsystem is remarketed LSI, aka Symbios, aka NCR. (Yes, as in the old SCSI card maker NCR.) FAStT has a really cheap heritage, and only supports active/passive failover, like other low-end products such as the EMC Clariion line.

      For a better feel for the DS line, you have to look at the feature set of the ESS (shark) line.

      The sharks have two pSeries boxes in them that act as an intermediary between the FC (fabric) host-adapters on the front end, and the IBM SSA disk loops and trays on the back end. Th
  • Backup 4TB? (Score:4, Interesting)

    by soundman32 ( 147936 ) on Wednesday October 13, 2004 @02:58AM (#10511643) Homepage
    And how the eckle feckin do we back that baby up?
    • punch cards. lots and lots of punch cards...
    • I'm sure IBM would be happy to sell you more than one.
    • Re:Backup 4TB? (Score:4, Informative)

      by keesh ( 202812 ) * on Wednesday October 13, 2004 @04:21AM (#10511854) Homepage
      LTO2 tapes are 200GBytes each... Remember that these boxes can flashcopy (instantly do a complete copy of your data, kinda like LVM snapshot support but actually working and a hell of a lot more powerful, oh, and done in hardware), so you don't need to stop your database whilst you're waiting to write it to tape.
    • Re:Backup 4TB? (Score:3, Interesting)

      by rhaig ( 24891 )
      Backing up 4TB is almost trivial; I handle almost 5 now. The more interesting case would be the full 67.2 TB.

      ok, so if you want 12 weeks of retention, and do nightly incrementals and weekly fulls, 67.2 TB would require about 952TB of tape capacity.

      (figure 3% daily change * 6 incrementals/week + 1 full/week * 12 weeks, so 14.16*disk=tape)

      so if we round up to 1PB of tape, and let's assume LTO2, with 320GB/tape (about the numbers I'm seeing for binary data), that's roughly 3200 tape slots + 260/week for off
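      (The retention math above, as a Python sketch; the 3% change rate and 320GB/tape are the poster's assumptions.)

        disk_tb, weeks, daily = 67.2, 12, 0.03
        weekly = 1 + 6 * daily               # one full + six incrementals per week
        tape_tb = disk_tb * weekly * weeks
        tapes = tape_tb * 1000 / 320         # LTO2 at ~320GB per tape
        print(round(tape_tb), round(tapes))  # 952 2974; rounding disk up to 1PB gives ~3200 slots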
  • by jtharpla ( 531787 ) <{ten.kradllams} {ta} {prahtj}> on Wednesday October 13, 2004 @03:03AM (#10511662) Homepage
    We've been getting disk arrays like the DS6000 for months now... for example:

    RocketSTOR R2221 [zzyzx.com]
    or
    Silicon Mechanics SM-316RX [siliconmechanics.com]
  • The 3U box can only hold 16 pcs of 300GB disks max, that is about 5TB. The 67TB can only be achieved with additional boxes, of course.
  • The enterprise storage system, which is available in either dual two processor or dual four processor configurations, includes an architecture that can address over 96 petabytes of data - or more than 4500 times the amount of information found in the Library of Congress.
  • Good:
    - Robust technology
    - Modular
    - IBM support
    Bad:
    - Expensive
    - Only 2 GB of cache (mirrored)
    - Slow, check out http://www.storageperformance.org/results
  • Sure it might be cool to jam 16 HDs into such a small space, but at about $100K it sure seems a bit pricey. Couldn't you get maybe 2-4 of the most reliable PC cases you can find and stuff them with HDs?

    Or is there some super-important feature that I'm missing?
  • Next in the series, the DS9000!

    Bill, comp.lang.c

  • Shame (Score:3, Funny)

    by oojah ( 113006 ) on Wednesday October 13, 2004 @08:13AM (#10512756) Homepage

    As a Brit, I'm somewhat disappointed that the writeup meant the other "pound".

    The ONLY 125 pound storage device that will hold up to 67.2 TB!

    I don't really need 67.2TB of storage, but at £125 I would certainly have considered it. £1.86 per TB is not a bad price (US$3.33)

    Cheers,

    Roger
  • The picture of the DS8000 looked small, like something I could stick under my desk. Then I looked at the specs - it's 6 feet tall. It also has
    - 8 processors (power5)
    - 32-256 GB RAM
    - up to 640 disk drives
    - 4 port 2gig FC (anyone know where I can get a USB-to-FC adapter?)
    - weight: 2880 pounds (each expansion is 2400 additional)
    - 30,000 BTU/hr. Converted, that is 8800 watts, or 12 horsepower!
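    (Checking that conversion in Python, with standard constants; nothing here comes from IBM's docs.)

      btu_per_hr = 30_000
      watts = btu_per_hr * 0.29307                  # 1 BTU/hr is about 0.293 W
      print(round(watts), round(watts / 745.7, 1))  # 8792 W, 11.8 hp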
  • Using a single one of these devices, you can get up to 6.4TB with 400GB disks. The 67TB figure is for multiple devices!

    The weight figure is a bit on the low side, yet not that far off, but again it's for multiple of these boxes, and it essentially reflects the weight of the disks plus some overhead.

"Yes, and I feel bad about rendering their useless carci into dogfood..." -- Badger comics

Working...