Data Storage

Xiotech Unveils Disruptive Storage Technology

Posted by kdawson
from the storage-that's-smarter-than-you-are dept.
Lxy writes "After Xiotech purchased Seagate's Advanced Storage Architecture division, rumors circulated about what they were planning to do with their next-generation SAN. Today at Storage Networking World, Xiotech answered the question. The result is quite impressive: a SAN that can practically heal itself, as well as prevent common failures. There's already hype in the media, with much more to come. The official announcement is on Xiotech's site."
  • Okay, so at a brief glance it's looking like next-gen primary disk storage. I didn't see any mention of which RAID level it is (although I'm thinking they're probably going RAID 10? Maybe 6?). What's cool, though (at least in my opinion), is that it's going to cut down on SAN errors through self-diagnosis. Interesting; I'll have to check through the white paper.
    • by maharg (182366) on Tuesday April 08, 2008 @01:56PM (#23003776) Homepage Journal
      not only self-diagnosis, but onboard disk remanufacturing ffs

      100 or so engineers involved in the project have replicated Seagate's own processes for drive telemetry monitoring and error detection -- and drive re-manufacturing -- in firmware on the Linux-based ISE. ISE automatically performs preventive and remedial processes. It can reset disks, power cycle disks, implement head-sparing operations, recalibrate and optimize servos and heads, perform reformats on operating drives, and rewrite entire media surfaces if needed. Everything that Seagate would do if you returned a drive for service.
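
      A rough sketch of what such an escalation loop might look like, in Python. Everything here is hypothetical (the drive methods are stand-ins for the remanufacturing operations listed above), not Xiotech's actual firmware logic:

          def remediate(drive, is_healthy):
              """Try progressively more invasive repairs until the drive passes."""
              steps = [
                  drive.reset,              # soft-reset the drive electronics
                  drive.power_cycle,        # full power cycle
                  drive.recalibrate_servo,  # re-optimize servos and heads
                  drive.spare_bad_head,     # map out a failing head
                  drive.reformat_online,    # reformat while still in the array
                  drive.rewrite_media,      # rewrite surfaces from redundancy
              ]
              for step in steps:
                  step()
                  if is_healthy(drive):
                      return True           # recovered; keep it in service
              return False                  # exhausted; fail the drive for real

      The point of the design, as described, is that every rung of this ladder happens inside the sealed enclosure instead of via an RMA.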
    • by hackus (159037) on Tuesday April 08, 2008 @03:11PM (#23004650) Homepage
      Exactly what we do not need.

      Next generation hardware that is patent encumbered and will require a lawyer and several court proceedings for anyone and everyone to get their data back.

      I mean, come on, when is the industry going to figure out that we do not need proprietary, closed storage solutions that are a rehash of the old IBM AS/400 days, when you could only buy super-expensive IBM gear?

      No thanks, I will take my open code and commodity hardware and build solutions that will kick this patented solution's arse at 1/100th the cost.

      Besides, if these features are really worth their salt, the open source community will provide them sooner or later. Preferably in Europe, where silly patent claims of the form "this product is so unique nobody could possibly figure it out, gimme a lot of money because I am brilliant" don't fly.

      Not brilliant, and not worth the cost in my opinion (both the cost in restrictions, from the patents and infrastructure choices this product imposes, and the cost in currency).

      -Hack

  • Unclarity (Score:2, Interesting)

    by Eudial (590661) on Tuesday April 08, 2008 @01:47PM (#23003674)
    The unclarity!

    These are just some of the questions popping into my head:
    What is SAN?
    What does it do?
    How is it disruptive?
    Who does it disrupt?
    What does it store?

    Can't say skimming through TFA makes it a lot clearer either.

    Also, two obscure articles is media buzz?
    • by Yetihehe (971185) on Tuesday April 08, 2008 @01:49PM (#23003692)
      Not every article is addressed to people who are not in the field of storage...
    • Re:Unclarity (Score:5, Informative)

      by TubeSteak (669689) on Tuesday April 08, 2008 @01:51PM (#23003714) Journal

      What is SAN?
      What does it do?
      How is it disruptive?
      Who does it disrupt?
      What does it store?
      http://en.wikipedia.org/wiki/Storage_area_network [wikipedia.org]
      It's remote storage.
      Their new tech saves you the trouble of swapping HDs.
      It disrupts the people offering maintenance contracts.
      It stores whatever you want.

      http://www.xiotech.com/images/Reliability-Pyramid.gif [xiotech.com]
      My question:
      What is "Failing only one surface"
    • Re:Unclarity (Score:4, Interesting)

      by ILuvRamen (1026668) on Tuesday April 08, 2008 @01:51PM (#23003720)
      the less they tell you, the more hype it gets, regardless of how good it is. Remember Vista? It was supposed to be the end-all OS sent straight down from heaven, but they didn't release any specifics. And now look what happened. I suspect a magical storage system that can heal itself and never fails or loses data is a bit of an exaggeration too.
    • Re:Unclarity (Score:3, Informative)

      by Xaedalus (1192463) <Xaedalys@NoSPaM.yahoo.com> on Tuesday April 08, 2008 @01:54PM (#23003748)
      Assuming you're being literal with your confusion... A SAN is a Storage Area Network that organizations use to back up data off their main networks. A lay explanation: think of your normal network and how it's connected. A SAN (usually composed of Fibre Channel or SCSI connections) underlays that existing standard network and moves all the data you want to back up to disk or tape, without eating up the bandwidth you have on your normal network. It's usually driven by a backup server, or sometimes by a normal server (but only if you want to eat up your processing power). What this disrupts (if it's true) is how a SAN monitors itself. It's basically proactive monitoring and a different configuration of spinning disks. I'm not sure which RAID array they're using, so it may not be as 'revolutionary' as they're proclaiming it to be. Please note: any network admins, PLEASE feel free to correct me if I'm wrong (because there's nothing worse than giving a layman explanation that's inaccurate).
      • Re:Unclarity (Score:5, Informative)

        by Anonymous Coward on Tuesday April 08, 2008 @01:59PM (#23003820)
        Just to clarify, SANs generally aren't used primarily for backups - they're just used for server storage, to have a centralized and more thoroughly redundant setup than local disks (i.e. put your 40TB document repository on the SAN and connect it over fiber to the server, or have your server boot off the SAN instead of local disks, etc.). They're treated like local disks by the OS, but are in a different location, connected via fiber or nowadays via iSCSI.

        While you can sometimes do some neat tricks with backups and a good SAN infrastructure, it's by no means its primary purpose in life.
      • Re:Unclarity (Score:5, Informative)

        by DJProtoss (589443) on Tuesday April 08, 2008 @02:10PM (#23003944)
        You could use it for that, but that's not the main use.
        It *is* a network like your Ethernet network (with switches, adaptors, etc.), but usually it's FC (Fibre Channel) rather than Ethernet. You use a SAN to put your server's disks in a different box from the server.
        But why would I do that? Heat, consolidation, redundancy.
        A typical setup is to have a few 1U or 2U servers (rack heights are measured in U; 1U is 1.75") attached to a 3U storage controller.
        This is a box with lots of drives (typically 14 in a 3U box). There will be a small controller computer in there too, as well as some RAID chips.
        Typically you might configure a 14-drive box as a pair of 5+1 RAID 5 arrays and a couple of hot spares (5+1 means five drives of data and one of parity). Effectively your six drives appear as one with 5x the capacity of a single component drive, and you can survive the loss of one drive without losing data. If you do have a drive go offline, the controller should transparently start rebuilding the failed disk onto one of the hot spares (and presumably raise a note via email or SNMP that it needs a new disk).
        The controller is then configured to present these arrays (called volumes in storage speak) to specific servers (called hosts).
        The host will see each array as a single drive (/dev/sdX) that it uses as per normal, oblivious to the fact that it's in a different box.
        Now to revisit why we do this:
        1. heat - by putting all the hot bits (drives) together, we can concentrate where the cooling goes
        2. reliability - any server using the above setup can have a disk fail and it simply won't notice. With the hot-spare setup, you can potentially lose several drives between maintenance visits (as long as they don't all fail at once).
        3. cost - you can buy bigger drives, then partition your array into smaller volumes (just like you partition your desktop machine's drive) and give different chunks to different hosts, reducing per-GB cost (which, when you are potentially talking about terabytes and petabytes of disk space, is rather important).
        As for what these guys are up to, I've not had a chance to look yet. I might post back.
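
        A concrete illustration of the 5+1 parity arithmetic described above, as a short Python sketch (simplified: real RAID 5 rotates the parity block across all member drives rather than pinning it to one):

            def xor_blocks(blocks):
                """XOR equal-length byte blocks together."""
                out = bytearray(len(blocks[0]))
                for block in blocks:
                    for i, b in enumerate(block):
                        out[i] ^= b
                return bytes(out)

            data = [bytes([d] * 4) for d in (10, 20, 30, 40, 50)]  # five data blocks
            parity = xor_blocks(data)                              # the "+1" drive

            # Drive 2 dies: rebuild its block from the four survivors plus parity.
            survivors = data[:2] + data[3:] + [parity]
            assert xor_blocks(survivors) == data[2]

        Since parity is just the XOR of the data blocks, any single missing block falls out of XOR-ing everything that's left, which is exactly what the controller does onto a hot spare during a rebuild.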
      • Re:Unclarity (Score:5, Informative)

        by DJProtoss (589443) on Tuesday April 08, 2008 @02:21PM (#23004072)
        OK, have now RTFA'd. Basically, they have built a shiny controller/enclosure. (The enclosure is the frame that contains the drives, and the controller is the circuit that interfaces with them, although, confusingly, controllers are often built into enclosures (especially at the lower end) and the whole thing is still referred to as a controller.)
        This controller is a sealed unit (read: better heat/vibration support, but not a user-serviceable component) with excess disks inside (multiple hot spares, so even if several drives fail over time it keeps going), combined with the knowledge SAN techs across the globe share: most errors are transient, and if you pull the disk out and stick it back in, it will probably work again. They have just built a controller that does that for you automatically. Definitely on the evolution rather than revolution side of things, and I have to admit I fail to see the disruption here, although I could well be missing something (the white paper is somewhat light on details, shall we say).
        • Re:Unclarity (Score:3, Interesting)

          by jwgoerlich (661687) on Tuesday April 08, 2008 @04:23PM (#23005586) Homepage Journal

          better heat/vibration support, but not a user-serviceable component

          Heat is key here. Have you ever stood next to a petabyte of storage? Or even a few terabytes? Most SANs kick off a lot of heat from all those disks. When matching a SAN to HVAC, one ton of cooling per terabyte is typical.

          Xiotech's ISE mounts the disks on a very large aluminum-alloy heat sink that wicks heat away from the drives, keeping the disks cooler and extending their lifespan.

          Xiotech had a petabyte of storage on the SNW expo floor. I stood right next to it, surrounded by the crowd. The heat? Next to none, and no additional cooling was required for the demo; it was at ambient temperature. The cost savings in HVAC must be rather impressive.

          J Wolfgang Goerlich

    • by crovira (10242) on Tuesday April 08, 2008 @02:28PM (#23004152) Homepage
      I'm not going to educate you except to tell you to Google for it.

      The disruptive part is that it seems to be much more reliable, which would mean you can wave the tech goodbye for a while, instead of losing access to a string of drives RAIDed together while a failed drive is rebuilt and replaced.

      Think of running XFS without having to worry about the drives' physical reliability, because they're really reliable. (If you've got 5PB online, the usual question is "which drive just failed?" instead of "here's the data.")

      "What does it store?" Jeez ... "Shoes" What the hell do you think it stores? How about data!

      But you are correct in that TFA didn't carry a price list of various configurations.
      • by Phishcast (673016) on Tuesday April 08, 2008 @03:38PM (#23005032)

        I'm not going to educate you except to tell you to Google for it. The disruptive part is that it seems to be much more reliable, which would mean you can wave the tech goodbye for a while, instead of losing access to a string of drives RAIDed together while a failed drive is rebuilt and replaced.

        Umm... I think you forgot what the "R" in RAID stands for. You may have somewhat degraded performance during a rebuild when you spare in for a drive that has failed, but you don't lose access to any data because of a single disk failure (save for RAID 0, which isn't really RAID to begin with).

        I wouldn't call this disruptive. It sounds like they've done some smart things to bring disks back to life when other hardware would call them failed, but you can bet that they're packaging more spares in these non-user serviceable enclosures than you would in a user serviceable configuration.

    • by Chrisje (471362) on Wednesday April 09, 2008 @02:18AM (#23010018)
      You're so right.

      As a mass storage man for HP, I was hoping to see something about cool Storage Area Network technology. Self-healing and self-fixing SANs would indeed be a cool thing, because the way I see it, I get more questions about the infrastructure than about the actual boxes in the SAN.

      If you look at current offerings from a couple of the major vendors, you'll see that there are boxes that are already guaranteeing 100% uptime and have all the redundancies and diagnostics built in to actually deliver the goods.

      The problem with data storage hype such as this (if it ever becomes hype) is that people suddenly think that having such a box is a substitute for a decent support contract on the actual infrastructure, or for a backup.

      Your DBA inserts incorrect data into a production database. Your Exchange server becomes virus-ridden due to insufficient patch management. Your users delete that very important project folder. Someone snags the cable between the server and the storage device because they're having a bad day. Windows machines on a SAN do tape polling, disrupting the bus.

      The examples of configuration mishaps, logical data loss and sheer accidents far outstrip the instances where such a box would go down itself. So while it's nice that someone claims to have come up with a new Disk Box that will heal itself, the summary is misleading because it claims it will heal the SAN.

      Which it won't, judging by the Marketing BS I just read.
    • by severoon (536737) on Wednesday April 09, 2008 @12:16PM (#23014630) Journal
      Reading a site like this is going to expand your knowledge, but not by handing you all the information on a silver platter. All it can do is show you the door, but you are the one that must go through it.
  • Disruptive? (Score:4, Insightful)

    by 99BottlesOfBeerInMyF (813746) on Tuesday April 08, 2008 @01:51PM (#23003716)

    The result is quite impressive: a SAN that can practically heal itself, as well as prevent common failures.

    Maybe I'm missing something. I read their announcement and one of the articles on this new product. As near as I can tell, they're selling SAN systems where instead of plugging in individual drives, you plug in a box with two drives in it. They paired this with some nice software for working around failed sectors and rewriting correctable drive problems. I guess I'm just not all that impressed. Is this really "disruptive" technology? It looks like evolutionary improvements and some nice automation to take some of the grunt work out of managing a SAN.

    I'm, admittedly, not an expert on network storage. So what do people think? Is this really the best thing since sliced bread or just another slashvertisement someone hyped to sound like news for nerds and rehashing a lot of marketing weasel words?

    • by cdrudge (68377) on Tuesday April 08, 2008 @01:58PM (#23003812) Homepage
      Actually, I think instead of a box with one drive... or two... you will have ten 3.5" drives or twenty 2.5" ones. So you have one big RAID-like cluster with a big "gas gauge"-like dial on the front that tells you how much performance you have left... whatever that means. Whoopdedo. But since they use an acronym every other word in the ESJ article, I suppose we should be very impressed.
      • Re:Disruptive? (Score:3, Interesting)

        by timeOday (582209) on Tuesday April 08, 2008 @02:23PM (#23004108)

        So you have one big RAID-like cluster with a big "gas gauge"-like dial on the front that tells you how much performance you have left... whatever that means. Whoopdedo.
        I would call that a great thing. I've never understood why I couldn't just have a bank of a dozen drives with another 10 empty slots, and have it move data around automatically to increase performance and maintain redundancy. When enough data is stored or enough drives break that I'm close to losing redundancy, a light turns on, and I pop in another few drives and it keeps chugging.
        • Re:Disruptive? (Score:3, Interesting)

          by mochan_s (536939) on Tuesday April 08, 2008 @03:06PM (#23004594)

          I would call that a great thing. I've never understood why I couldn't just have a bank of a dozen drives with another 10 empty slots, and have it move data around automatically to increase performance and maintain redundancy. When enough data is stored or enough drives break that I'm close to losing redundancy, a light turns on, and I pop in another few drives and it keeps chugging.
          One reason I can think of is that drive failures correlate strongly with the power supply and equipment the drives sit on. I've seen centers where one particular rack unit kept eating disks.
        • by swb (14022) on Tuesday April 08, 2008 @03:15PM (#23004720)
          I've never understood why they don't do this. I went and looked at a Xiotech Magnitude in 2002 at their offices here in the Twin Cities. They gave me the big dog-and-pony show (my current boss had bought two a year before, at a different company), and when they were demoing the unit, I asked whether, if you put new drives in, it restriped the existing data across them, to make adding new LUNs more flexible. They looked sheepish and said no, the new drives had to be created as a new drive group.

          The SANs I've seen since then (admittedly all fairly low-end; I've never gotten to use/manage one of the high-end systems) all just look like direct-attach SCSI RAID with an integrated controller and a NIC/FC connector.

          You would think the idea would be to chuck in drives (with some minimum, like 8 or 12) and have the physical data storage be totally abstracted from the user, with N+2 redundancy and hotspare functionality totally guaranteed, and then allow the user to create LUNs without concern for underlying physical storage.

          When you need more space, you add more drives and the system manages the striping, re-striping as necessary for optimum throughput and maximum redundancy, rebuilding failed drives as necessary.
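
          A toy sketch, in Python, of the fully abstracted pool described above: extents are spread across whatever drives exist, and adding a drive triggers a restripe. Everything here is hypothetical and simplified (a real array would also track parity groups, hot spares, and rebuild state):

              class Pool:
                  def __init__(self, drives):
                      self.extents = {d: [] for d in drives}   # drive -> extents

                  def place(self, extent):
                      # new data always lands on the least-loaded drive
                      target = min(self.extents, key=lambda d: len(self.extents[d]))
                      self.extents[target].append(extent)

                  def add_drive(self, drive):
                      self.extents[drive] = []
                      self.rebalance()

                  def rebalance(self):
                      # drain the fullest drive into the emptiest until even
                      while True:
                          hi = max(self.extents, key=lambda d: len(self.extents[d]))
                          lo = min(self.extents, key=lambda d: len(self.extents[d]))
                          if len(self.extents[hi]) - len(self.extents[lo]) <= 1:
                              break
                          self.extents[lo].append(self.extents[hi].pop())

              pool = Pool(["d1", "d2", "d3"])
              for i in range(30):
                  pool.place(f"extent{i}")
              pool.add_drive("d4")   # new capacity is absorbed automatically
              print({d: len(x) for d, x in pool.extents.items()})  # 7-8 extents each

          LUNs built on top of such a pool never need to know which spindles hold their extents, which is exactly the abstraction the parent is asking for.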
          • Re:Disruptive? (Score:3, Informative)

            by igjeff (15314) on Tuesday April 08, 2008 @03:35PM (#23004988)

            You would think the idea would be to chuck in drives (with some minimum, like 8 or 12) and have the physical data storage be totally abstracted from the user, with N+2 redundancy and hotspare functionality totally guaranteed, and then allow the user to create LUNs without concern for underlying physical storage.

            When you need more space, you add more drives and the system manages the striping, re-striping as necessary for optimum throughput and maximum redundancy, rebuilding failed drives as necessary.
            There are systems out there that do this sort of thing, but they're *expensive*.

            Take a look at HP's EVA line. They're really quite good at this.

            I'd be careful about using the terms "optimum" and "maximum" in that last paragraph, but they get quite close to that mark.

            Other vendors have equipment that performs about as well...IMO, the HP EVA line is the best at it, however.

            Jeff (only affiliated with HP as a mostly happy customer)
          • by johnlcallaway (165670) on Tuesday April 08, 2008 @03:42PM (#23005084)
            Having everything abstracted isn't always a good idea. Smart people will always be better at being able to get the most performance out of a machine, because we are creative and can think of how to use things in ways they weren't intended.

            It's nice having the ability to abstract things completely when performance isn't paramount, but when those performance bottlenecks start to become an issue, it's nice to remove the abstraction and start becoming more specific about how things interact.

            For instance, I remember installing a SAN of sorts a few years ago. The vendor's RAID 10 wasn't fast enough; we would overrun the cache on bulk loads. So we built two RAID 5s on two different controllers, and then 'mirrored' the controllers using the OS. Worked much better.
            • by swb (14022) on Tuesday April 08, 2008 @04:04PM (#23005370)
              Sure, nothing is as good as having a real human make the decision, but that scales really poorly.

            • by Phishcast (673016) on Wednesday April 09, 2008 @08:38AM (#23012026)
              Somebody already replied and said that what you're suggesting doesn't scale (they're right), but to add to that, here's why. When people invest in SANs and storage arrays, they typically want to take advantage of economies of scale and maximize utilization. That means they aren't going to put one application on the box and tune for it as you suggested, but they'll be putting 5, 20, or 100 applications on the same array. It's nearly impossible to tune each individual application's storage without dedicating spindles to each (or groups of each). Once you've done that, you may as well be buying separate storage for each application, and you're back to direct-attached storage without the economies and utilization you were after in the first place.

              Striping all data across as many spindles as possible ensures reasonably predictable performance, and almost always better performance than you would have had by hand-tuning.
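
              A back-of-envelope illustration of the wide-striping argument, in Python, with made-up numbers (180 IOPS per 15k spindle is a common rule of thumb; the rest is arbitrary):

                  SPINDLES, IOPS_PER_SPINDLE, APPS = 100, 180, 20

                  dedicated_peak = (SPINDLES // APPS) * IOPS_PER_SPINDLE  # 5 spindles each
                  pooled_peak = SPINDLES * IOPS_PER_SPINDLE               # whole pool

                  print(dedicated_peak)  # 900 IOPS ceiling per app
                  print(pooled_peak)     # 18,000 IOPS available to a bursting app

              As long as the apps don't all burst at once, each one sees a far higher ceiling from the shared pool than from its own dedicated spindle group.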

          • by Phishcast (673016) on Tuesday April 08, 2008 @03:48PM (#23005146)
            Off the top of my head, all of the following companies have storage arrays which basically do exactly what you're asking for. When you create a LUN it's across all available spindles and data will re-balance across all available disks as you add more, all with RAID redundancy. I'm not sure about N+2 at this point, but RAID-6 is becoming ubiquitous in the storage industry.

            HP (EVA)
            3Par
            Dell/Equallogic
            Compellent
            Pillar
            HDS (USP)

            I'd be shocked if Xiotech doesn't do this today.

        • by Mr2001 (90979) on Tuesday April 08, 2008 @06:01PM (#23006538) Homepage Journal
          Drobo [drobo.com] does this. There's an LED gauge showing how full the array is, and when a drive needs replacing, the light next to it turns red. You can mix and match different sized drives, too - when it gets full, you can pop out a 250 GB and put in a 1 TB.

          The downside is it only has 4 bays and connects via USB 2.
          • by Guspaz (556486) on Wednesday April 09, 2008 @01:06PM (#23015304)
            Unfortunately, it's extremely expensive, and all reviews point to abysmal performance. Also, it doesn't even have an Ethernet jack to act as a NAS. Sort of kills the deal, don't you think?

            If, for that price (or less; those things are really overpriced), it had onboard GigE and they fixed the performance problems, it'd probably be a decent deal.
            • by Mr2001 (90979) on Wednesday April 09, 2008 @05:07PM (#23018158) Homepage Journal
              Well, right, it's not really meant to compete with NAS. I think the performance problems are caused by the USB link. There is an attachment you can use to connect it to a network, but it's also expensive and doesn't improve performance.

              But if someone could convince them to put their technology to use in a professional storage array instead of a consumer-level RAID For Dummies...
    • Re:Disruptive? (Score:5, Informative)

      by kaiser423 (828989) on Tuesday April 08, 2008 @01:59PM (#23003826)
      well, RTFA. For mod points, it's disruptive because it runs Linux!

      The second article describes this very well. One big extra is that this system can perform all of the standard drive-repair operations that typically only OEMs can. This helps to keep you from replacing drives that aren't bad, but had a hiccup.

      It's also not just two drives in an ISE, but more like 10-20 (3.5" and 2.5" respectively), with a bunch of Linux software to give each ISE a pretty robust feature set in itself. They also up the sector size to 520 bytes, leaving space for data-validity checks to keep silent corruption from sneaking into the system.
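
      A sketch of the 520-byte-sector idea in Python: each 512-byte block carries an 8-byte integrity footer that is verified on every read. (The checksum here is a truncated SHA-256 for illustration; real 520-byte formats such as T10 DIF use a CRC plus application/reference tags.)

          import hashlib

          SECTOR, FOOTER = 512, 8

          def write_sector(data):
              assert len(data) == SECTOR
              return data + hashlib.sha256(data).digest()[:FOOTER]  # 520 bytes on disk

          def read_sector(raw):
              data, footer = raw[:SECTOR], raw[SECTOR:]
              if hashlib.sha256(data).digest()[:FOOTER] != footer:
                  raise IOError("silent corruption detected; rebuild from redundancy")
              return data

          good = write_sector(b"x" * SECTOR)
          bad = bytearray(good)
          bad[100] ^= 1                     # one flipped bit in flight
          read_sector(good)                 # passes
          # read_sector(bytes(bad))         # would raise IOError

      The point is that a corrupted block announces itself at read time, so the array can rebuild it from redundancy instead of silently returning bad data.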

      In the end, it's probably not wholly revolutionary. It does seem like an evolutionary jump, though, with great performance, a great feature set, and a very well-thought-out system that brings new technology and ideas to bear.
  • Sweet... (Score:3, Funny)

    by steppin_razor_LA (236684) on Tuesday April 08, 2008 @01:58PM (#23003802) Journal
    The disk healing features are very interesting.

    We have a Xiotech Magnitude that we paid ~$150K for in 2003 that is sitting around like a giant paperweight. Any takers? $3,000? $2,000? going once... going twice... :)

    • Re:Sweet... (Score:2, Interesting)

      by schklerg (1130369) on Tuesday April 08, 2008 @02:30PM (#23004166)
      I've got 3 collecting dust! And based on my experience with that SAN, I will never entertain the slightest sales pitch from any Xiotech rep. I'm sure they've gotten better, but rebooting after changing the contact info in the system is a bit absurd. Not to mention that the management / configuration was on a single IDE hard drive running MS-DOS. Since a reboot cleared all logs, tech support's stock answer for odd issues was, 'it was in a bad state'. Had it moved to Arkansas? BAH!
      • by steppin_razor_LA (236684) on Tuesday April 08, 2008 @03:40PM (#23005074) Journal
        'it was in a bad state'. Had it moved to Arkansas? BAH!

        HAHAHA :) :)

        My biggest bitch was that the $150K solution consisted of $37K of HW and the rest was software licenses and/or support that doesn't seem to be transferable. This basically means that there is no secondary market for the devices because anyone who would buy one would need to buy new software licenses. Since the SW licenses are more valuable than the HW, it wouldn't make sense to buy used HW. "nice"....

        The above weighed in heavily in our decision not to go with Xiotech for our second SAN.

        That said, the article was still interesting. :)
  • by Spazmania (174582) on Tuesday April 08, 2008 @01:58PM (#23003804) Homepage
    They've integrated the controller and drive into devices that consume 3U of space in a rackmount computer cabinet. So now you can't upgrade a drive, you can only replace a module. Brilliant.

    The only thing this is likely to disrupt is Xiotech's cashflow.
    • by Chas (5144) on Tuesday April 08, 2008 @02:48PM (#23004406) Homepage Journal
      If (big IF) the units eat drives (as in, kill them in a non-repairable way), yeah, it's a bold and stupid move.

      If (another big IF) the unit keeps soft-failed drives (which weren't really bad to begin with) in play longer because it can recover them from *burps* in the system, then it's entirely possible that the unit could potentially be a money-saver.
    • You don't need to replace a module, because it doesn't break. See the failure rates/service event numbers from their presentations.

      People are so used to disks failing. Disks shouldn't fail as often as they do, and most of the time they don't fail at all - the storage controller is at fault because the drive and the controller have such a limited language (SCSI) to talk to each other with. ISEs do away with this limitation.
      • by MadMorf (118601) on Tuesday April 08, 2008 @03:19PM (#23004772) Homepage Journal
        You don't need to replace a module, because it doesn't break. See the failure rates/service event numbers from their presentations.

        Lab failure rates mean very little.

        Didn't Google just blow the lid off the disk manufacturers' MTBF numbers by reporting their own failure rates as an order of magnitude higher?

        Wait till they have a few thousand of these deployed. Then we'll know how good they really are...
        And that's when companies with big money to spend will take notice.

      • by Spazmania (174582) on Tuesday April 08, 2008 @03:38PM (#23005026) Homepage
        Thing is, I spent the last couple of years playing this game. I started with a dozen 36-gig SCSI disks that had bad sectors on them. I did thorough tests, abandoned the whole gigabyte around each bad sector, and software-RAID-5'd partitions from multiple drives, skipping those bad parts.

        Guess what? It didn't work out. The bad zones spread, and they spread faster than the RAID software could detect the new failure and rebuild onto the spare.

        I quite enjoyed the experiment, but these were my home servers. I wouldn't dream of doing this in a production environment. When the RAID controller kicks a drive for -any- reason, it goes back to the manufacturer for warranty replacement. The data is far too valuable to play games with.
        • by Guspaz (556486) on Wednesday April 09, 2008 @01:11PM (#23015374)
          You'd detect the failure the instant you tried to read the sector, and could speed up this process by using idle time to check sectors for corruption (checksums help). With double parity, you'd have to have the same stripe go bad on three different disks to lose data, and I have trouble believing that the failures would spread that fast.

          I mean, I still wouldn't trust it in a production environment, as you said; I just wonder how a more dynamic system (RAID-Z, for example) would handle the same scenario. At the very least, you could try to get one clean copy of all the data off the disks and migrate it to backups to prevent data loss.

          As in, your test would be constantly reading all data on the disk in a loop, and seeing how long it would take before data was lost permanently.
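
          A rough sketch of that idle-time scrub idea in Python. All the names here are hypothetical stand-ins, not any real array's API:

              import time

              def scrub(array, is_idle, rebuild):
                  """One full verification pass over the array."""
                  errors = 0
                  for stripe in array.stripes():
                      while not is_idle():          # yield to real I/O
                          time.sleep(0.1)
                      for block in stripe.blocks():
                          if not block.checksum_ok():
                              rebuild(block)        # reconstruct from parity/mirror
                              errors += 1
                  return errors

          Run continuously, this bounds how long a latent bad sector can hide, which is what determines whether failures can "line up" faster than double parity can cover them.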
    • by tppublic (899574) on Tuesday April 08, 2008 @03:36PM (#23005002)
      Honestly, there isn't much cash flow to disrupt. This isn't EMC, HP, IBM or Hitachi.

      The purpose of this product isn't to penetrate large data centers... or if Xiotech thinks it is, then they need new marketing employees (and quickly). Large data centers HAVE the expertise on site to do individual disk replacements, and those large enterprise data centers will demand the feature sets that exist in the much larger equipment from the larger vendors named above.

      This is targeted at much smaller data centers, probably those with very simple SANs (think a dozen or two servers), where the data center management skills won't match those in the larger data centers (simply because you have one or two generalists, not a dozen+ specialists). For those smaller sites, the return on investment for a system that requires less maintenance (and also less expertise) may make sense...

      Yes, this is evolutionary from a technical perspective, but it still approaches the solution in an interesting way... and may find its own market niche.

  • by RobertB-DC (622190) * on Tuesday April 08, 2008 @01:58PM (#23003814) Homepage Journal
    So if San is the technology, the drive that implements it would be called Chihiro, right?

    Oh, that was Sen [nausicaa.net] . My bad, sorry.

    (Well, it makes as much sense as anything. It's not like I'm going to bother reading TFA when it's clearly marked "hype".)
  • by nguy (1207026) on Tuesday April 08, 2008 @02:01PM (#23003856)
    I don't see anything particularly "disruptive" about this. Lots of storage systems are "self healing" and based on hot-swappable elements.

    The whole thing sounds like astroturfing.
  • by 93 Escort Wagon (326346) on Tuesday April 08, 2008 @02:14PM (#23003992)
    Why do people keep referring to incremental improvements to existing technology as "disruptive"? It's pretty obvious people don't understand the phrase "disruptive technology".

    My favorite misuse was when a marketing droid referred to Intel moving from a .65nm fab to .45nm as "disruptive". It's not just marketing folks, however - I've heard engineers and even my own college professors (usually if they're trying to turn their research into something commercially advantageous) do this.
    • by melted (227442) on Tuesday April 08, 2008 @02:56PM (#23004478) Homepage
      >> referred to Intel moving from a .65nm fab to .45nm as "disruptive"

      Disrupted AMD pretty good, from what I can see.
    • by Idiomatick (976696) on Tuesday April 08, 2008 @02:57PM (#23004486)
      It's pretty obvious you don't understand how resistant markets are to change. Anything new and not garbage that escapes past the board of directors is shocking and disruptive.
       
      Sorry about the pessimism (it's exam week)
    • by Ungrounded Lightning (62228) on Tuesday April 08, 2008 @03:16PM (#23004730) Journal
      It's pretty obvious people don't understand the phrase "disruptive technology".

      Hey!

        - If Microsoft's marketing department can redefine "Wizard" from "Human computer expert acknowledged as exceptionally skilled by his peers" to "only moderately brain-damaged menu-driven installation/configuration tool",

        - why can't Xiotech's marketing department redefine "disruptive technology" from "quantum leap in price/performance ratio of a competing technology based on a massively different architecture that makes it out-compete and displace the previous market-dominating solution" to "incremental generational upgrade in the latest model of our product which we hope will convince you to replace the competitor's product with ours (and disrupt both his business plan and your IT operation)"?
    • by networkBoy (774728) on Tuesday April 08, 2008 @03:59PM (#23005296) Homepage Journal
      it's 65 and 45nM or .065 and .045uM
      pedantic, I know...
      but .45nM would be phenomenally disruptive as it would literally be two orders of magnitude better litho than what is currently attainable commercially.

      -nB
      • by epine (68316) on Wednesday April 09, 2008 @11:37PM (#23020956)

        it's 65 and 45nM or .065 and .045uM
        pedantic, I know...
        Not nearly pedantic enough, unless you knew Herr Meter personally. Tell me, Herr Meter, why were you named that way? What did you discover to make yourself famous? And why did you say that Ångström deserved what he got?

        Funny, I was thinking on the way home that the intergalactic subway machine in Contact blew up because the alien schematic contained a typo calling for 1 eV, and the people building it failed to read it as 1 exavolt.

        Wikipedia tells me that 1 EeV/c² = 1.783×10⁻¹⁸ kg. Really? I thought 1 EeV would be more impressive. I guess it's not the exotic extraterrestrial vroom vroom I thought it was. No, wait, what am I talking about, vroom vroom is v.
  • Compellent (Score:4, Informative)

    by Anonymous Coward on Tuesday April 08, 2008 @02:14PM (#23003994)
    I'm not sure why this announcement is really news...somewhat interesting though, is that many of the founders and former employees of Xiotech have left to start a company called Compellent http://www.compellent.com/ [compellent.com]. Compellent's disk technology, imo, is a lot slicker than Xiotech's, particularly their "automated tiered storage".
    • Re:Compellent (Score:3, Interesting)

      by medelliadegray (705137) on Tuesday April 08, 2008 @03:31PM (#23004940)
      I admin a Compellent SAN, and I find their "automated tiered storage" to be great in concept, marketed superbly, yet highly expensive and highly lacking in configurability.

      If you want data automatically moved down to a slower tier but it gets touched just once a day? Good luck getting it to move down automatically.

      I anxiously await the day when the SAN market is acknowledged as the scam it is (a glorified RAID controller), and the various SAN companies die off in droves or become the everyday appliance they really are. It's obscene to pay a grand for a run-of-the-mill SATA disk, and about as much again or more in various licenses, all while being gouged yearly for 'support' contracts that are a sizable fraction of the cost of the hardware, disks, and licenses combined.

      Hurray for hemorrhaging cash!
      • by steppin_razor_LA (236684) on Tuesday April 08, 2008 @03:44PM (#23005100) Journal
        The idea that decisions about which disk/RAID class is best can be made by the HW on a block-by-block basis is very slick.

        We didn't shell out the $s for the licenses, because the old model (i.e. my databases are RAID 10, my file servers are RAID 5, etc.) works "good enough" compared to the sticker price of the automated tiered storage licenses.
      • by jwgoerlich (661687) on Tuesday April 08, 2008 @04:30PM (#23005656) Homepage Journal

        If you want data automatically moved down to a slower tier, but it gets touched just once a day.

        Data progression does the moving. DP only runs once a day by default, but you can change this schedule. You can also kick DP off manually. How? Ask Co-pilot.

        J Wolfgang Goerlich

      • by Lxy (80823) on Wednesday April 09, 2008 @09:53AM (#23012932) Journal
        I wouldn't call a SAN a "scam". You get what you pay for.

        Want a big box full of disk, fully redundant (RAID 1, 5, 5+1, 10, etc.)? Want it cheap? Got spare parts? Then, my friend, FreeNAS is for you: a homebrewed SAN that delivers enterprise-capable performance for practically nothing.

        Oh, you want FAST disk? OK, then you have to shell out for SAS or FC disk. You can still use FreeNAS, but now your hardware costs have gone up.

        Your box has multiple SAS controllers, multiple SAS drives, and now what? Gotta go external. More parts, more cost.

        Want professional support on that FreeNAS box? Ummm... whoops. OK, then instead of FreeNAS let's go with LeftHand Networks. A similar product to FreeNAS, but it carries a price tag for support.

        Wait, we just spent a lot of money to build a SAN to our liking, and it's still just a bunch of crap I cobbled together. We may as well have gone to a vendor like Xiotech, Compellent, NetApp, EqualLogic, whoever, and bought the real thing.

        Then enter larger corporations that have demand for high performance, full management, automatic everything, and of course a 24x7 tech-lives-onsite kind of contract. That's where EMC, HP, and the like have their products.

        So you see, there's an option for every need. There's an option for every budget. Some people have a legitimate need to drop some cash on a big reliable SAN. Some don't. Pick your price, but don't call it a "scam".
    • by Lxy (80823) on Wednesday April 09, 2008 @09:38AM (#23012742) Journal
      Considering that Compellent and Xiotech both run Seagate disks, how is the "disk technology" better in Compellent? Keep in mind that Compellent is merely a consumer of Seagate disks, Xiotech now owns a piece of Seagate that makes this new SAN do what it does.

      Also, doesn't Xiotech do automated tiering of storage (ILM or some weird acronym)?

      Btw, I don't work for either of these companies, but I have evaluated both products extensively.
  • by Anonymous Coward on Tuesday April 08, 2008 @02:17PM (#23004040)
    On the Xiotech site:

    "#1 Lowest cost per disk IOPS
    #1 Lowest cost per MB/sec"

    Looking around, I don't see any quoted prices on the page.

    It's funny how it's always a project in itself to find the price tag for products. When companies run on "the bottom line," why are they so reluctant to tell us, straightforwardly and upfront, what the consumer's "bottom line" is?

    It should become law that to advertise a product, you must post clearly what the price tag (or range) is, either at the top or the bottom, especially if you are telling people it's "cost-effective" without providing the cost. Am I saving $1 or $10,000?

    Captcha: increase
    • by sarabob (544622) on Tuesday April 08, 2008 @04:00PM (#23005314)
      There's a link in the white paper to the benchmarks (http://www.storageperformance.org/results/benchmark_results_spc1#a00064), which then gives you pricing info on the tested configurations.

      A 1TB array with 40 15k 2.5" drives in RAID 1 is $36,500 (list price is $61k; the price used by the SPC includes a 40% discount!) with a three-year, 24/7, 4-hour maintenance contract. It generates 8,720.12 SPC-1 IOPS, making it $4.19/IOPS.

      The other tested config used 20 146GB drives to get 5,800 IOPS for $21k, or $3.53/IOPS.

      (As a comparison, a 12TB NetApp FAS3040 system gets 31k IOPS for $420k = $13.61/IOPS, and no 40% discount here :-)).

      Now double-check the quotes: "World Record SPC Benchmark 1™: Lowest cost per SPC-1 IOPS™". Hmmm. A RamSan-400 gets 291,208 IOPS for $194k, at $0.67/IOPS. (Some places on the Xiotech website say 'lowest cost per disk IOPS' as some kind of get-out clause, but not all.)

      Interesting that the support contract for NetApp & EMC appears to cost as much as a Xiotech array plus contract, although they do seem to have 140-150 disks in the tested configurations rather than a measly 20 :-)
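
      To sanity-check those cost-per-IOPS figures, a quick bit of Python over the numbers quoted above (small deviations, like $3.62 vs. the quoted $3.53, come from the rounded prices):

          configs = {
              "Xiotech 40-drive": (36_500, 8_720.12),
              "Xiotech 20-drive": (21_000, 5_800),
              "NetApp FAS3040":   (420_000, 31_000),
              "RamSan-400":       (194_000, 291_208),
          }
          for name, (price, iops) in configs.items():
              print(f"{name}: ${price / iops:.2f}/IOPS")
          # Xiotech 40-drive: $4.19/IOPS ... RamSan-400: $0.67/IOPS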

  • by SwashbucklingCowboy (727629) on Tuesday April 08, 2008 @02:54PM (#23004466)
    Well, no. It's an array that can practically heal itself (at least in theory). BIG difference...
  • by bperkins (12056) on Tuesday April 08, 2008 @03:08PM (#23004616) Homepage Journal
    Each datapac in the ISE presents a gas-gauge-like monitor showing you how much performance is being used [...]

    If you can use your gas gauge to measure how fast you're going, you're probably driving too fast.
  • by MadMorf (118601) on Tuesday April 08, 2008 @03:14PM (#23004688) Homepage Journal
    Nothing really new here, except that the box is sealed, which means that when they've bought a batch of disks with an undiscovered flaw, there's no way to fix or replace them...

    Seagate Tomcats anyone?

    Also, would you trust your enterprise storage to laptop drives? Running 24/7/36...

    How long will those last?

    Hell, most SATA disks are unsuitable for anything but nearline storage, and even then, they're iffy...Keep plenty of spares!
  • by TheSync (5291) * on Tuesday April 08, 2008 @03:47PM (#23005130) Journal
    In the broadcast engineering space, we see a lot of this kind of thing...

    Avid Unity ISIS [avid.com]
    Omneon MediaGrid [omneon.com]
    DataDirect S2A [datadirectnet.com]

  • by heroine (1220) on Tuesday April 08, 2008 @06:25PM (#23006748) Homepage
    This is so arcane. It's like sitting on Tasman Dr. watching Net Appliance buy up lot after lot while VA Linux announces a new thing called a build-to-order NAS. NAS is the future again. Buy Excite.com!
