Data Storage

Xiotech Unveils Disruptive Storage Technology 145

Posted by kdawson
from the storage-that's-smarter-than-you-are dept.
Lxy writes "After Xiotech purchased Seagate's Advanced Storage Architecture division, rumors circulated about what they were planning to do with their next-generation SAN. Today at Storage Networking World, Xiotech answered the question. The result is quite impressive: a SAN that can practically heal itself, as well as prevent common failures. There's already hype in the media, with much more to come. The official announcement is on Xiotech's site."
  • Okay, so at a brief glance it's looking like next-gen primary disk storage. I didn't see any mention of which RAID level they're using (although I'm thinking they're probably going RAID 10? Maybe 6?). What's cool, though (in my opinion at least), is that it's going to cut down on SAN errors through self-diagnosis. Interesting; I'll have to check through the white paper.
    • by maharg (182366) on Tuesday April 08, 2008 @01:56PM (#23003776) Homepage Journal
      not only self-diagnosis, but onboard disk remanufacturing ffs

      100 or so engineers involved in the project have replicated Seagate's own processes for drive telemetry monitoring and error detection -- and drive re-manufacturing -- in firmware on the Linux-based ISE. ISE automatically performs preventive and remedial processes. It can reset disks, power cycle disks, implement head-sparing operations, recalibrate and optimize servos and heads, perform reformats on operating drives, and rewrite entire media surfaces if needed. Everything that Seagate would do if you returned a drive for service.
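The remedial sequence described above reads like an escalation ladder: try the cheapest fix first, escalate only if the drive still fails its checks. A minimal sketch (all names here are invented for illustration; Xiotech's actual firmware interface is not public):

```python
# Hypothetical ISE-style remediation ladder (names invented, not Xiotech's).
# Each remedy is tried in order; escalate only while the drive still fails.
REMEDIES = [
    "reset",            # soft reset of the drive electronics
    "power_cycle",      # drop and restore power to the drive
    "spare_head",       # retire a failing head, keep the rest in service
    "recalibrate",      # re-tune servo and head parameters
    "reformat",         # low-level reformat while the drive stays online
    "rewrite_surface",  # rewrite an entire media surface
]

def remediate(drive_ok, remedies=REMEDIES):
    """Apply remedies in order until the drive passes its self-test.

    drive_ok: callable(remedy) -> bool, True once the drive checks out
    after the given remedy. Returns the remedy that succeeded, or None
    if every step failed and the drive must be failed out.
    """
    for remedy in remedies:
        if drive_ok(remedy):
            return remedy
    return None  # nothing worked: fail the drive (or just one surface)

# Example: a drive that only recovers after a power cycle.
assert remediate(lambda r: r == "power_cycle") == "power_cycle"
```

The point of the ordering is that most transient errors clear at the first or second rung, so the drive rarely needs to leave service at all.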
    • Re: (Score:3, Interesting)

      by hackus (159037)
      Exactly what we do not need.

      Next generation hardware that is patent encumbered and will require a lawyer and several court proceedings for anyone and everyone to get their data back.

      I mean, come on, when is the industry going to figure out that we do not need proprietary, closed storage solutions that are a rehash of the old IBM AS/400 days, when you could only buy super-expensive IBM gear?

      No thanks, I will take my open code and commodity hardware and build solutions that will kick this patented solution's arse at 1
  • Unclarity (Score:2, Interesting)

    by Eudial (590661)
    The unclarity!

    These are just some of the questions popping into my head:
    What is SAN?
    What does it do?
    How is it disruptive?
    Who does it disrupt?
    What does it store?

    Can't say skimming through TFA makes it a lot clearer either.

    Also, two obscure articles is media buzz?
    • by Yetihehe (971185)
      Not every article is addressed to people who aren't in the storage field...
        They should at least give pointers to where to look... and I mean exact links, not "google it".
        • by Anonymous Coward on Tuesday April 08, 2008 @02:01PM (#23003860)
          Let's see now... ah! I've got it. Here's an exact link for you: http://www.google.com/search?hl=en&q=san [google.com]
        • by Yetihehe (971185)
          I have seen some commentary on Wired which essentially said that using the internet and Wikipedia can be compared to using brain enhancers, i.e. a wiki user is expanding their brain with knowledge they can "remember" just by typing words (like searching your memory for something you don't recall easily). That said, it looks like you are not using your intelligence and don't even want to augment it in any way. It would take you only two seconds to run a search, but you've chosen to remain in ignorance.

          (* This
    • Re:Unclarity (Score:5, Informative)

      by TubeSteak (669689) on Tuesday April 08, 2008 @01:51PM (#23003714) Journal

      What is SAN?
      What does it do?
      How is it disruptive?
      Who does it disrupt?
      What does it store?
      http://en.wikipedia.org/wiki/Storage_area_network [wikipedia.org]
      It's remote storage.
      Their new tech saves you the trouble of swapping HDs.
      It disrupts the people offering maintenance contracts.
      It stores whatever you want.

      http://www.xiotech.com/images/Reliability-Pyramid.gif [xiotech.com]
      My question:
      What is "Failing only one surface"
      • by gnick (1211984) on Tuesday April 08, 2008 @03:16PM (#23004738) Homepage

        It stores whatever you want.
        I'd like mine to store beer and bacon. Any idea on the capacity or replication capabilities?
      • by Jesus_666 (702802)
        Perhaps it somehow prevents a scenario where more than one platter is inaccessible at the same time? Although I guess you'd have to switch to single-platter hard drives in order to even attempt that...
      • What is "Failing only one surface"

        A hard drive can fail in many ways: sector, track, platter, head. ISE can fail just the one surface -- say, a platter -- and keep writing to the remaining device. The broken platter is removed from service while the remaining disk storage continues to be used until end of life.

        This is all done automatically and transparently. What they are trying to eliminate is the time it takes for someone to physically swap out a disk.

        J Wolfgang Goerlich

      • by ErMaC (131019)
        Disk drives are made up of surfaces on platters. Generally, disk drives have multiple platters, with each platter having a top and bottom surface.

        Currently, if one chunk of a surface in a disk has a problem, the whole disk is considered bad. The disk has no way to communicate which part has died.
        Xiotech's hooks into the firmware allow it to write around bad areas on the surface of a disk, and when a portion of a surface does fail, it only has to rebuild that portion rather than the entire disk drive.
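Failing a single surface instead of a whole drive can be pictured with a toy model (the structure below is illustrative only, not Xiotech's actual bookkeeping):

```python
# Toy model of per-surface failure (illustrative, not a real firmware
# data structure). A drive is a set of surfaces; each platter has a
# top and a bottom surface, each holding a range of blocks.
class Drive:
    def __init__(self, platters=4, blocks_per_surface=1000):
        self.surfaces = {(p, side): blocks_per_surface
                         for p in range(platters)
                         for side in ("top", "bottom")}
        self.failed = set()

    def fail_surface(self, surface):
        """Retire one surface; the rest of the drive stays in service."""
        self.failed.add(surface)

    def usable_blocks(self):
        return sum(n for s, n in self.surfaces.items()
                   if s not in self.failed)

d = Drive()
total = d.usable_blocks()      # 8 surfaces * 1000 blocks
d.fail_surface((2, "top"))     # one surface goes bad
# Only 1/8 of the capacity is lost, and only that slice needs
# rebuilding, instead of the entire drive.
assert d.usable_blocks() == total - 1000
```

The rebuild cost scales with the failed slice, which is the comment's point: re-protecting 1/8 of a drive is much faster than resilvering the whole thing.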
    • Re:Unclarity (Score:4, Interesting)

      by ILuvRamen (1026668) on Tuesday April 08, 2008 @01:51PM (#23003720)
      the less they tell you, the more hype it gets, regardless of how good it is. Remember Vista? It was supposed to be the end-all OS sent straight down from heaven, but they didn't release any specifics. And now look what happened. I suspect this magical storage system that can heal itself and never fails or loses data is a bit of an exaggeration too.
      • indeed. servers that ``fix themselves'' with magic pixie dust... didn't we hear this 10 years ago from IBM?
        • lol I worked for IBM for 3 days through a contractor and they couldn't even pull some magic pixie dust out of their ass to get their hand scanner PDAs to work hehehe.
    • Re: (Score:3, Informative)

      by Xaedalus (1192463)
      Assuming you're being literal with your confusion... A SAN is a Storage Area Network that organizations use to back up data off their main networks. A lay explanation: think of your normal network and how it's connected. A SAN (usually composed of Fibre Channel or SCSI connections) underlays that existing standard network and moves all the data you want to back up to disk or tape, without eating up the bandwidth you have on your normal network. It's usually driven by a back-up server, or sometime
      • Re:Unclarity (Score:5, Informative)

        by Anonymous Coward on Tuesday April 08, 2008 @01:59PM (#23003820)
        Just to clarify, SANs generally aren't used primarily for backups - they're just used for server storage, to have a centralized and more thoroughly redundant setup than local disks (i.e. put your 40TB document repository on the SAN and connect it over fiber to the server, or have your server boot off the SAN instead of local disks, etc). They're treated like local disks by the OS, but are in a different location, connected via fiber or, nowadays, via iSCSI.

        While you can sometimes do some neat tricks with backups and a good SAN infrastructure, it's by no means its primary purpose in life.
      • Re:Unclarity (Score:5, Informative)

        by DJProtoss (589443) on Tuesday April 08, 2008 @02:10PM (#23003944)
        You could use it for that, but that's not the main use.
        It *is* a network like your Ethernet network (with switches, adaptors, etc.), but usually it's FC (Fibre Channel) rather than Ethernet. You use a SAN to put your servers' disks in a different box from the server.
        But why would I do that? Heat, consolidation, redundancy.
        A typical setup is to have a few 1U or 2U servers (rack heights are measured in U; 1U is 1.75") attached to a 3U storage controller.
        This is a box with lots of drives (typically 14 in a 3U box). There will be a small computer controller in there too, as well as some RAID chips.
        Typically in a 14-drive box you might configure it as a pair of 5+1 RAID 5 arrays and a couple of hot spares (5+1 means five drives of data and one parity drive). Effectively your six drives appear as one with 5x the capacity of a single component drive. You can survive the loss of one drive without losing data. If you do have a drive go offline, the controller should transparently start rebuilding the failed disk onto one of the hot spares (and presumably raise a note via email or SNMP that it needs a new disk).
        The controller is then configured to present these arrays (called volumes in storage speak) to specific servers (called hosts).
        The host will see each array as a single drive (/dev/sdX) that it uses as per normal, oblivious to the fact that it's in a different box.
        Now to revisit why we do this:
        1. Heat - by putting all the hot bits (drives) together, we can concentrate where the cooling goes.
        2. Reliability - any server using the above setup can have a disk fail and simply won't notice. With the hot-spare setup, you can potentially lose several drives between maintenance visits (as long as they don't all fail at once).
        3. Cost - you can buy bigger drives, then partition your array into smaller volumes (just like you partition your desktop machine's drive) and give different chunks to different hosts, reducing per-GB cost (which, when you are potentially talking about terabytes and petabytes of disk space, is rather important).
        As for what these guys are up to, I've not had a chance to look yet. I might post back.
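The 5+1 parity arithmetic described above boils down to XOR. A minimal sketch (with a fixed parity drive for simplicity, which is strictly RAID 4; real RAID 5 rotates parity across the drives):

```python
# Minimal sketch of 5+1 parity: the parity block is the XOR of the five
# data blocks, so any single lost drive can be rebuilt from the survivors.
from functools import reduce

def parity(blocks):
    """XOR together one block from each drive in the stripe."""
    return reduce(lambda a, b: a ^ b, blocks)

data = [0b1010, 0b0110, 0b1111, 0b0001, 0b0011]  # 5 data drives
p = parity(data)                                  # the 6th, parity drive

# Drive 2 fails: XOR the four survivors with parity to get it back.
survivors = data[:2] + data[3:]
rebuilt = parity(survivors + [p])
assert rebuilt == data[2]
```

This is exactly what the rebuild onto a hot spare does, block by block across the whole array, which is why a rebuild reads every surviving drive.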
        • You forgot the most important bit: speed. SANs are orders of magnitude faster than most internal hardware RAID. Think many GBs of battery-backed write cache.
      • Re:Unclarity (Score:5, Informative)

        by DJProtoss (589443) on Tuesday April 08, 2008 @02:21PM (#23004072)
        OK, have now RTFA'd. Basically, they have built a shiny controller/enclosure. (The enclosure is the frame that contains the drives, and the controller is the circuitry that interfaces with them; confusingly, controllers are often built into enclosures, especially at the lower end, with the combined unit still referred to as a controller.)
        This controller is a sealed unit (read: better heat/vibration handling, but not a user-serviceable component) with excess disks inside (multiple hot spares, so even if several drives fail over time it keeps going), combined with the knowledge SAN techs across the globe share: most errors are transient, and if you pull the disk out and stick it back in, it will probably work again. They have just built a controller that does that for you automatically. Definitely on the evolution rather than revolution side of things, and I have to admit I fail to see the disruption here, although I could well be missing something (the whitepaper is somewhat light on details, shall we say).
        • Re: (Score:3, Interesting)

          by jwgoerlich (661687)

          better heat/vibration support, but not a user servicable component

          Heat is key here. Have you ever stood next to a petabyte of storage? Or even a few terabytes? Most SANs kick off a lot of heat from all those disks. When sizing SAN to HVAC, 1 TB to 1 ton is typical.

          Xiotech's ISE mounts the disks on a very large aluminum-alloy heat sink that wicks heat away from the drives. Better dissipation means less heat on the disks, improving cooling and extending lifespan.

          Xiotech had

    • I'm not going to educate you except to tell you to Google for it.

      The disruptive part is that it seems to be much more reliable, which would mean that you can wave the tech goodbye for a while, instead of having to lose access to a string of drives RAIDed together while a drive that failed gets rebuilt and replaced.

      Think of running XFS without having to worry about the drives' physical reliability, because they're really reliable. (If you've got 5PB online it's usually "which drive just failed"
      • by Phishcast (673016)

        I'm not going to educate you except to tell you to Google for it. The disruptive part is that it seems to be much more reliable, which would mean that you can wave the tech goodbye for a while, instead of having to lose access to a string of drives RAIDed together while a drive that failed gets rebuilt and replaced.

        Umm... I think you forgot what the "R" in RAID stands for. You may have somewhat degraded performance during a rebuild when you spare in for a drive that has failed, but you don't lose access to any data because of a single disk failure (save for RAID 0, which isn't really RAID to begin with).

        I wouldn't call this disruptive. It sounds like they've done some smart things to bring disks back to life when other hardware would call them failed, but you can bet that they're packaging more spares in these n

        • by afidel (530433)
          They have 20% spare capacity, which IS a lot more than you would put in a user-serviceable enclosure. The great thing is you get the speed advantage of the spares: while they limit the amount of data stored to 80%, they are normally using all spindles in the pack.
    • by Chrisje (471362)
      You're so right.

      As a mass-storage man for HP, I was hoping to see something about cool Storage Area Network technology. Self-healing and self-fixing SANs would indeed be a cool thing, because the way I see it, I get more questions about the infrastructure than about the actual boxes in the SAN.

      If you look at current offerings from a couple of the major vendors, you'll see that there are boxes that are already guaranteeing 100% uptime and have all the redundancies and diagnostics built in to actually deliver the goods.

      The
    • by severoon (536737)
      Reading a site like this is going to expand your knowledge, but not by handing you all the information on a silver platter. All it can do is show you the door, but you are the one that must go through it.
  • Disruptive? (Score:4, Insightful)

    by 99BottlesOfBeerInMyF (813746) on Tuesday April 08, 2008 @01:51PM (#23003716)

    The result is quite impressive, a SAN that can practically heal itself, as well as prevent common failures.

    Maybe I'm missing something. I read their announcement and one of the articles on this new product. As near as I can tell, they're selling SAN systems where instead of plugging in individual drives, you plug in a box with two drives in it. They paired this with some nice software for working around failed sectors and rewriting correctable drive problems. I guess I'm just not all that impressed. Is this really "disruptive" technology? It looks like evolutionary improvements and some nice automation to take some of the grunt work out of managing a SAN.

    I'm, admittedly, not an expert on network storage. So what do people think? Is this really the best thing since sliced bread or just another slashvertisement someone hyped to sound like news for nerds and rehashing a lot of marketing weasel words?

    • by cdrudge (68377)
      Actually, I think instead of a box with one drive... or two... you'll have ten 3.5" drives or twenty 2.5" ones. So you have one big RAID-like cluster with a big "gas gauge" dial on the front that tells you how much performance you have left... whatever that means. Whoopdedo. But since they use an acronym every other word in the ESJ article, I suppose we should be very impressed.
      • Re: (Score:3, Interesting)

        by timeOday (582209)

        So you have one big RAID-like cluster with a big "gas gauge" like dial on the front that tells you how much performance you have left...whatever that means. Whoopdedo.

        I would call that a great thing. I've never understood why I couldn't just have a bank of a dozen drives with another 10 empty slots, and have it move data around automatically to increase performance and maintain redundancy. When enough data is stored or enough drives break that I'm close to losing redundancy, a light turns on, and I pop in another few drives and it keeps chugging.

        • Re: (Score:3, Interesting)

          by mochan_s (536939)

          I would call that a great thing. I've never understood why I couldn't just have a bank of a dozen drives with another 10 empty slots, and have it move data around automatically to increase performance and maintain redundancy. When enough data is stored or enough drives break that I'm close to losing redundancy, a light turns on, and I pop in another few drives and it keeps chugging.

          One reason I can think of is because there is a high correlation of drive failures to the power supply and equipment that it'

        • by swb (14022)
          I've never understood why they don't do this. I went and looked at a Xiotech Magnitude in 2002 at their offices here in the Twin Cities. They gave me the big dog-and-pony show (my current boss had bought two a year before at a different company), and when they were demoing the unit, I asked whether, if you put new drives in, it restriped the existing data to include the new drives, making adding new LUNs more flexible. They looked sheepish and said no, the new drives had to be created as a new drive group.

          The SA
          • Re: (Score:3, Informative)

            by igjeff (15314)

            You would think the idea would be to chuck in drives (with some minimum, like 8 or 12) and have the physical data storage be totally abstracted from the user, with N+2 redundancy and hotspare functionality totally guaranteed, and then allow the user to create LUNs without concern for underlying physical storage.

            When you need more space, you add more drives and the system manages the striping, re-striping as necessary for optimum throughput and maximum redundancy, rebuilding failed drives as necessary.

            There are systems out there that do this sort of thing, but they're *expensive*.

            Take a look at HP's EVA line. They're really quite good at this.

            I'd be careful about using the terms "optimum" and "maximum" in that last paragraph, but they get quite close to that mark.

            Other vendors have equipment that performs about as well...IMO, the HP EVA line is the best at it, however.

            Jeff (only affiliated with HP as a mostly happy customer)
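The pooled, automatically re-striped model described above can be sketched with a simple round-robin layout (assumed purely for illustration; EVA-class arrays use far more sophisticated placement than this):

```python
# Toy sketch of pooled striping: LUN blocks are spread over all drives,
# and adding drives re-balances the layout so every spindle carries a
# similar load. (Round-robin placement is an assumption for clarity.)
def layout(num_blocks, num_drives):
    """Map block i -> drive, round-robin across all spindles."""
    return {i: i % num_drives for i in range(num_blocks)}

def load(mapping, num_drives):
    """Blocks resident on each drive under the given layout."""
    counts = [0] * num_drives
    for d in mapping.values():
        counts[d] += 1
    return counts

before = layout(1200, 12)            # 12 drives, 100 blocks each
after = layout(1200, 16)             # add 4 drives and re-stripe

assert max(load(before, 12)) == 100  # evenly loaded before
assert max(load(after, 16)) == 75    # and still even after growth
```

The re-striping cost is the catch: growing from 12 to 16 drives moves a large fraction of the blocks, which is why arrays do it gradually in the background.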

          • Having everything abstracted isn't always a good idea. Smart people will always be better at getting the most performance out of a machine, because we are creative and can think of ways to use things that they weren't intended for.

            It's nice having the ability to abstract things completely when performance isn't paramount, but when those performance bottlenecks start to become an issue, it's nice to remove the abstraction and start becoming more specific about how things interact.

            As a for instance, I
            • by swb (14022)
              Sure, nothing is as good as having a real human make the decision, but that scales really poorly.

            • by Phishcast (673016)
              Somebody already replied and said that what you're suggesting doesn't scale (they're right), but to add to that, here's why. When people invest in SANs and storage arrays, they typically want to take advantage of economies of scale and maximize utilization. That means they aren't going to put one application on the box and tune for it as you suggested, but they'll be putting 5, 20, or 100 applications on the same array. It's nearly impossible to tune each individual application's storage without dedicati
          • Off the top of my head, all of the following companies have storage arrays which basically do exactly what you're asking for. When you create a LUN it's across all available spindles and data will re-balance across all available disks as you add more, all with RAID redundancy. I'm not sure about N+2 at this point, but RAID-6 is becoming ubiquitous in the storage industry.

            HP (EVA)
            3Par
            Dell/Equallogic
            Compellent
            Pillar
            HDS (USP)

            I'd be shocked if Xiotech doesn't do this today.

            • by swb (14022)
              The Equalogic we have at work doesn't work like this, or at least it doesn't seem to. LUNs don't appear to be striped across all disks from what I can tell, but I haven't worked with it in a while.
              • by Phishcast (673016)
                Well, I specifically asked this question of an Equallogic salesman at the bar last year and he told me this was how it worked. He may have been blowing smoke or not understood the question. Either way, he was buying :).
        • by Mr2001 (90979)
          Drobo [drobo.com] does this. There's an LED gauge showing how full the array is, and when a drive needs replacing, the light next to it turns red. You can mix and match different sized drives, too - when it gets full, you can pop out a 250 GB and put in a 1 TB.

          The downside is it only has 4 bays and connects via USB 2.
          • by Guspaz (556486)
            Unfortunately, it's extremely expensive, and all reviews point to abysmal performance. Also, it doesn't even have an Ethernet jack to act as a NAS. Sort of kills the deal, don't you think?

            If, for that price (or less; those things are really overpriced), it had onboard GigE and the performance problems were fixed, it'd probably be a decent deal.
            • by Mr2001 (90979)
              Well, right, it's not really meant to compete with NAS. I think the performance problems are caused by the USB link. There is an attachment you can use to connect it to a network, but it's also expensive and doesn't improve performance.

              But if someone could convince them to put their technology to use in a professional storage array instead of a consumer-level RAID For Dummies...
    • Re:Disruptive? (Score:5, Informative)

      by kaiser423 (828989) on Tuesday April 08, 2008 @01:59PM (#23003826)
      well, RTFA. For mod points, it's disruptive because it runs Linux!

      The second article describes this very well. One big extra is that this system can perform all of the standard drive-repair operations that typically only OEMs can. This helps keep you from replacing drives that aren't bad but just had a hiccup.

      It's also not just two drives in an ISE, but more like 10-20 (3.5" and 2.5" respectively), with a bunch of Linux software giving each ISE a pretty robust feature set in itself. They also up the block size to 520 bytes, leaving space for data-validity checks to keep silent corruption from sneaking into the system.

      In the end, it's probably not wholly revolutionary. It does seem like an evolutionary jump, though, with great performance, a great feature set, and a very well-thought-out system that brings new technology and ideas to bear.
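The 520-byte-block idea above pairs each 512-byte sector with 8 extra bytes of integrity metadata checked on every read. A hedged sketch (the field layout here is invented; shipping arrays use T10 DIF-style formats with CRCs and reference tags):

```python
# Sketch of 520-byte sectors: 512 bytes of data plus 8 bytes of
# integrity metadata (CRC + padding here; real layouts differ).
import struct
import zlib

def write_sector(data512):
    assert len(data512) == 512
    crc = zlib.crc32(data512)
    return data512 + struct.pack(">II", crc, 0)  # 512 + 8 = 520 bytes

def read_sector(sector520):
    data = sector520[:512]
    crc, _pad = struct.unpack(">II", sector520[512:])
    if zlib.crc32(data) != crc:
        raise IOError("silent corruption detected")
    return data

s = write_sector(b"x" * 512)
assert read_sector(s) == b"x" * 512

bad = bytes([s[0] ^ 0xFF]) + s[1:]  # flip bits in the stored data
try:
    read_sector(bad)
except IOError:
    pass  # corruption caught on read, instead of silently returned
```

The win is that a bit flip anywhere between the host and the platter fails loudly at read time rather than propagating into the application.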
      • They've added the OEM services to THE DEVICE ITSELF. (evolution)

        They've made those OEM services on the device AUTOMATICALLY kick in. (evolution step 2)

        They've sealed the units. (evolution)

        Which, in effect, means that most of the SAN expertise that FORMERLY required an experienced tech is now incorporated, and these SANs can be installed and "maintained" by less technically skilled personnel.

        Which will make these devices VERY easy to sell. You pay ONCE for the tech and save on the cost of the technician's sa
        • by Amouth (879122)
          I already requested a price sheet - the sales rep I talked to didn't know what it was and is going to give me a call back..

          we shall see if they are affordable
  • Sweet... (Score:3, Funny)

    by steppin_razor_LA (236684) on Tuesday April 08, 2008 @01:58PM (#23003802) Homepage Journal
    The disk healing features are very interesting.

    We have a Xiotech Magnitude that we paid ~$150K for in 2003 that is sitting around like a giant paperweight. Any takers? $3,000? $2,000? Going once... going twice... :)

    • Re: (Score:2, Interesting)

      by schklerg (1130369)
      I've got 3 collecting dust! And based on my experience with that SAN, I will never entertain the slightest sales pitch from any Xiotech rep. I'm sure they've gotten better, but rebooting after changing the contact info in the system is a bit absurd. Not to mention that the management / configuration was on a single IDE hard drive running MS-DOS. Since a reboot cleared all logs, tech support's stock answer for odd issues was, 'it was in a bad state'. Had it moved to Arkansas? BAH!
      • 'it was in a bad state'. Had it moved to Arkansas? BAH!

        HAHAHA :) :)

        My biggest bitch was that the $150K solution consisted of $37K of HW and the rest was software licenses and/or support that doesn't seem to be transferable. This basically means that there is no secondary market for the devices because anyone who would buy one would need to buy new software licenses. Since the SW licenses are more valuable than the HW, it wouldn't make sense to buy used HW. "nice"....

        The above weighed in heavily in our decis
        • by Jesus_666 (702802)
          Well, you could rip out most of the internals and put generic PC components in. Then you end up with a 37 Kilodollar case mod.
  • by Spazmania (174582) on Tuesday April 08, 2008 @01:58PM (#23003804) Homepage
    They've integrated the controller and drives into devices that consume 3U of space in a rackmount computer cabinet. So now you can't upgrade a drive; you can only replace a module. Brilliant.

    The only thing this is likely to disrupt is Xiotech's cashflow.
    • by Chas (5144)
      If (big IF), the units eat drives (as in kill them in a non-repairable way), yeah, it's a bold and stupid move.

      If (another big IF) the unit keeps soft-failed drives (which weren't really bad to begin with) in play longer because it can recover them from *burps* in the system, then it's entirely possible that the unit could potentially be a money-saver.
      • by Spazmania (174582)
        The odds of the soft fail savings catching up to the difference in economy of scale are not good.
        • Big storage vendors are expected to charge less for maintenance now? Where, What, Huh?

          Seriously, it's not like an enterprise disk array owner can just stop over at Best Buy, pick up a new drive and pop it in whenever he feels like it.

          Sure the price of DISKS will go down but you know the cost of having some monkey stop by the data center to replace a failed DMX drive isn't going anywhere.

          Supposedly the maintenance from Xiotech is going to be $1 on these things. Gimmick, sure, but in theory that's where the
    • by ErMaC (131019)
      You don't need to replace a module, because it doesn't break. See the failure rates/service event numbers from their presentations.

      People are so used to disks failing. Disks shouldn't fail as often as they do, and most of the time they don't fail at all - the storage controller is at fault because the drive and the controller have such a limited language (SCSI) to talk to each other with. ISEs do away with this limitation.
      • by MadMorf (118601)
        You don't need to replace a module, because it doesn't break. See the failure rates/service event numbers from their presentations.

        Lab failure rates mean very little.

        Didn't Google just blow the lid off the disk manufacturers' MTBF numbers by reporting their own failure rates as being an order of magnitude higher?

        Wait till they have a few thousand of these deployed. Then we'll know how good they really are...
        And that's when companies with big money to spend will take notice.
      • by Spazmania (174582) on Tuesday April 08, 2008 @03:38PM (#23005026) Homepage
        Thing is, I spent the last couple of years playing this game. I started with a dozen 36-gig SCSI disks that had bad sectors on them. I did thorough tests, abandoned the whole gigabyte where the bad sectors were found, and software-RAID-5'd partitions from multiple drives, skipping those bad parts.

        Guess what? It didn't work out. The bad zones spread, and they spread faster than the RAID software could detect the new failure and rebuild onto the spare.

        I quite enjoyed the experiment, but these were my home servers. I wouldn't dream of doing this in a production environment. When the RAID controller kicks the drive for -any- reason, it's back to the manufacturer for warranty replacement. The data is far too valuable to play games with.
        • by Guspaz (556486)
          You'd detect the failure the instant you tried to read the sector, and could speed up this process by using idle time to check sectors for corruption (checksums help). With double parity, you'd have to have the same stripe go bad on three different disks to lose data, and I have trouble believing that the failures would spread that fast.

          I mean, I still wouldn't trust it in a production environment, as you said, I just wonder how a more dynamic system (RAID-Z, for example) would handle the same scenario. At
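The idle-time checking suggested above is essentially a scrub pass. A minimal sketch, assuming per-block checksums and some repair source such as parity or a mirror (not any specific product's implementation):

```python
# Sketch of a background scrub: walk the array during idle time, verify
# each block's checksum, and repair failures from redundancy before
# enough of them line up on one stripe to cause data loss.
import zlib

def scrub(blocks, checksums, repair):
    """Verify every block; repair any that fail its checksum.

    blocks: mutable list of bytes; checksums: expected crc32 per block;
    repair: callable(i) that reconstructs block i from parity/mirrors.
    Returns the indices that needed repair.
    """
    repaired = []
    for i, (blk, want) in enumerate(zip(blocks, checksums)):
        if zlib.crc32(blk) != want:
            blocks[i] = repair(i)  # rebuild from redundancy
            repaired.append(i)
    return repaired

good = [b"aaaa", b"bbbb", b"cccc"]
sums = [zlib.crc32(b) for b in good]
disk = [good[0], b"bXbb", good[2]]  # block 1 silently corrupted

assert scrub(disk, sums, lambda i: good[i]) == [1]
assert disk == good  # corruption found and healed during idle time
```

Because the scrub touches every block on a schedule, latent errors are found and fixed one at a time, rather than discovered all at once during a rebuild.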
    • Re: (Score:3, Interesting)

      by tppublic (899574)
      Honestly, there isn't much cash flow to disrupt. This isn't EMC, HP, IBM or Hitachi.

      The purpose of this product isn't to penetrate large data centers... or if Xiotech thinks it is, then they need new marketing employees (and quickly). Large data centers HAVE the expertise on site to do individual disk replacements, and those large enterprise data centers will demand the feature sets that exist in the much larger equipment from the larger vendors named above.

      This is targeted at much smaller data center

  • So if San is the technology, the drive that implements it would be called Chihiro, right?

    Oh, that was Sen [nausicaa.net] . My bad, sorry.

    (Well, it makes as much sense as anything. It's not like I'm going to bother reading TFA when it's clearly marked "hype".)
  • I don't see anything particularly "disruptive" about this. Lots of storage systems are "self healing" and based on hot-swappable elements.

    The whole thing sounds like astroturfing.
  • by 93 Escort Wagon (326346) on Tuesday April 08, 2008 @02:14PM (#23003992)
    Why do people keep referring to incremental improvements to existing technology as "disruptive"? It's pretty obvious people don't understand the phrase "disruptive technology".

    My favorite misuse was when a marketing droid referred to Intel moving from a .65nm fab to .45nm as "disruptive". It's not just marketing folks, however - I've heard engineers and even my own college professors (usually if they're trying to turn their research into something commercially advantageous) do this.
    • by melted (227442) on Tuesday April 08, 2008 @02:56PM (#23004478) Homepage
      >> referred to Intel moving from a .65nm fab to .45nm as "disruptive"

      Disrupted AMD pretty good, from what I can see.
    • Its pretty obvious you don't understand how resistant markets are to change. Anything new and not garbage that escapes past the board of directors is shocking and disruptive.
       
      Sorry about the pessimism (it's exam week)
    • It's pretty obvious people don't understand the phrase "disruptive technology".

      Hey!

      - If Microsoft's marketing department can redefine "Wizard" from "Human computer expert acknowledged as exceptionally skilled by his peers" to "only moderately brain-damaged menu-driven installation/configuration tool",

      - why can't Xiotech's marketing department redefine "disruptive technology" from "quantum leap in price/performance ratio of a competing technology based on a massively different architecture th
    • it's 65 and 45nM or .065 and .045uM
      pedantic, I know...
      but .45nM would be phenomenally disruptive as it would literally be two orders of magnitude better litho than what is currently attainable commercially.

      -nB
      • it's 65 and 45nM or .065 and .045uM
        pedantic, I know...

        Not nearly pedantic enough, unless you knew Herr Meter personally. Tell me, Herr Meter, why were you named that way? What did you discover to make yourself famous? And why did you say that Ångström deserved what he got?

        Funny, I was thinking on the way home that the intergalactic subway machine in Contact blew up because the alien schematic contained a typo calling for 1 eV, and the people building it failed to read it as 1 exavolt.

        Wikipedia tells me that 1 EeV/c = 1.783×1018 kg Really?

        • by epine (68316)
          Whoops, Slashcode consumed the second dimension and more. That's probably where the alien error originated in the first place. Morse coded the schematic onto a six sided cube without first pressing submit. Cocky bastards.

  • Compellent (Score:4, Informative)

    by Anonymous Coward on Tuesday April 08, 2008 @02:14PM (#23003994)
    I'm not sure why this announcement is really news...somewhat interesting though, is that many of the founders and former employees of Xiotech have left to start a company called Compellent http://www.compellent.com/ [compellent.com]. Compellent's disk technology, imo, is a lot slicker than Xiotech's, particularly their "automated tiered storage".
    • Re: (Score:3, Interesting)

      I admin a Compellent SAN, and I find their "automated tiered storage" to be great in concept and marketed superbly, yet highly expensive and highly lacking in configurability.

      If you want data automatically moved down to a slower tier but it gets touched just once a day, good luck getting it to move down automatically.

      I anxiously await the day when the SAN market is acknowledged as the scam it is (a glorified raid controller), and the various SAN companies die off in droves or become an everyday appliance
      • The idea that decisions about what disk/RAID class is best can be made by the HW on a block-by-block basis is very slick.

        We didn't shell out the $s for the licenses because the old model (i.e. my databases are RAID-10, my file servers are RAID-5, etc) works "good enough" when compared to the sticker of the automated tiered storage licenses.
      • If you want data automatically moved down to a slower tier, but it gets touched just once a day.

        Data progression does the moving. DP only runs once a day by default, but you can change this schedule. You can also kick DP off manually. How? Ask Co-pilot.

        J Wolfgang Goerlich
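A rough sketch of the age-based demotion policy being described, just to make the mechanics concrete. The tier names, the one-day threshold, and the `Block` structure here are all illustrative assumptions, not Compellent's actual Data Progression implementation:

```python
import time
from dataclasses import dataclass

TIERS = ["fast-15k", "slow-sata"]   # hypothetical tiers, fastest first
DEMOTE_AFTER = 24 * 3600            # demote blocks idle for more than a day

@dataclass
class Block:
    tier: int           # index into TIERS
    last_access: float  # epoch seconds of the most recent touch

def run_data_progression(blocks, now=None):
    """Daily scan: move each sufficiently idle block one tier down."""
    now = time.time() if now is None else now
    for b in blocks:
        if b.tier < len(TIERS) - 1 and now - b.last_access > DEMOTE_AFTER:
            b.tier += 1  # demote to the next slower tier

blocks = [Block(tier=0, last_access=time.time() - 2 * 86400),  # idle 2 days
          Block(tier=0, last_access=time.time())]              # just touched
run_data_progression(blocks)
print([TIERS[b.tier] for b in blocks])  # → ['slow-sata', 'fast-15k']
```

Note that a block touched even once a day keeps resetting its idle clock, which is exactly why daily-touched data never demotes under a policy shaped like this.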

      • by Lxy (80823)
        I wouldn't call a SAN a "scam". You get what you pay for.

        Want a big box full of disk, fully redundant (RAID 1, 5, 5+1, 10, etc.)? Want it cheap? Got spare parts? Then, my friend, FreeNAS is for you. A homebrewed SAN that delivers enterprise-capable performance for practically nothing.

        Oh, you want FAST disk? OK, then you have to shell out for SAS or FC disk. Can still use FreeNAS, but now your hardware costs have gone up.

        Your box has multiple SAS controllers, multiple SAS drives, and now what? Gotta go ex
    • by Lxy (80823)
      Considering that Compellent and Xiotech both run Seagate disks, how is the "disk technology" better in Compellent? Keep in mind that Compellent is merely a consumer of Seagate disks, Xiotech now owns a piece of Seagate that makes this new SAN do what it does.

      Also, doesn't Xiotech do automated tiers of storage (ILM or some weird acronym)?

      Btw, I don't work for either of these companies, but I have evaluated both products extensively.
  • by Anonymous Coward
    On the Xiotech site:

    "#1 Lowest cost per disk IOPS
    #1 Lowest cost per MB/sec"

    Looking around, I don't see any quoted prices on the page.

    It's funny how it's always a project in itself to find the price tag for products. When companies run on "the bottom line," why are they so reluctant to tell us, straightforwardly and upfront, what the consumer's "bottom line" is?

    It should become law that to advertise a product, you must post clearly what the price tag (range) is, either at the top or bottom. Especially if you are
    • by sarabob (544622)
      There's a link in the white paper to the benchmarks (http://www.storageperformance.org/results/benchmark_results_spc1#a00064), which then gives you pricing info on the tested configurations.

      A 1TB array with 40, 15k 2.5" drives in raid 1 is $36,500 (list price is $61k, the price used by the spc has a 40% discount!) with a three-year, 24/7 4hr maintenance contract. It generates 8,720.12 SPC-1 IOPS, making it $4.19/IOP

      The other tested config used 20 146GB drives to get 5,800 IOPS for $21k, $3.53/IOP.

      (a 12TB ne
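The cost-per-IOPS figures above fall out of a one-line division over the quoted discounted prices. A quick check (note the "$21k" figure is rounded in the comment, so the second result lands near, rather than exactly on, the quoted $3.53):

```python
# Cost-per-IOPS check using the discounted prices and SPC-1 results
# quoted in the comment above; figures are from the comment, not
# independently verified pricing.
configs = [
    ("40x 15k 2.5in drives, RAID 1", 36_500, 8_720.12),
    ("20x 146GB drives",             21_000, 5_800.0),
]

for name, price_usd, spc1_iops in configs:
    print(f"{name}: ${price_usd / spc1_iops:.2f} per SPC-1 IOPS")
# → 40x 15k 2.5in drives, RAID 1: $4.19 per SPC-1 IOPS
# → 20x 146GB drives: $3.62 per SPC-1 IOPS
```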
  • by SwashbucklingCowboy (727629) on Tuesday April 08, 2008 @02:54PM (#23004466)
    Well, no. It's an array that can practically heal itself (at least in theory). BIG difference...
  • Each datapac in the ISE presents a gas-gauge-like monitor showing you how much performance is being used [...]

    If you can use your gas gauge to measure how fast you're going, you're probably driving too fast.
  • Nothing really new here, except the box is sealed, which means when they have bought a batch of disks with an undiscovered flaw, there's no way to fix or replace them...

    Seagate Tomcats anyone?

    Also, would you trust your enterprise storage to laptop drives? Running 24/7/365...

    How long will those last?

    Hell, most SATA disks are unsuitable for anything but nearline storage, and even then, they're iffy...Keep plenty of spares!
    • by Phishcast (673016)
      Well, saying laptop drives isn't really fair. They've been making 10,000 RPM serial attached SCSI disks in a 2.5" form factor for quite some time, I know Sun uses them in servers. I'm not sure if there are 15,000 RPM disks out yet in this size or not. These are not your 5400RPM laptop drives.
      • by MadMorf (118601)

        Well, saying laptop drives isn't really fair. They've been making 10,000 RPM serial attached SCSI disks in a 2.5" form factor for quite some time, I know Sun uses them in servers. I'm not sure if there are 15,000 RPM disks out yet in this size or not. These are not your 5400RPM laptop drives.
        Ok, fair enough.

        Still, I personally don't like "black box" systems...
      • by afidel (530433)
        You can definitely get the SAS drives in 15K; I use them for my Citrix blade servers (HP BL460C). During heavy login periods the 15K is a must, especially since the blade only allows two spindles.
  • In the broadcast engineering space, we see a lot of this kind of thing...

    Avid Unity ISIS [avid.com]
    Omneon MediaGrid [omneon.com]
    DataDirect S2A [datadirectnet.com]

    • by csoto (220540)
      I am tempted to mod you down just for MENTIONING the damn Unity thing! :b
      • by TheSync (5291) *
        I am tempted to mod you down just for MENTIONING the damn Unity thing! :b

        Hah, I'd totally deserve it!
  • This is so arcane. It's like sitting on Tasman Dr. watching Net Appliance buy up lot after lot & VA Linux just announced a new thing called a build to order NAS. NAS is the future again. Buy Excite.com!

... when fits of creativity run strong, more than one programmer or writer has been known to abandon the desktop for the more spacious floor. -- Fred Brooks
