Data Storage

Xiotech Unveils Disruptive Storage Technology 145

Lxy writes "After Xiotech purchased Seagate's Advanced Storage Architecture division, rumors circulated about what they were planning to do with their next-generation SAN. Today at Storage Networking World, Xiotech answered the question. The result is quite impressive: a SAN that can practically heal itself, as well as prevent common failures. There's already hype in the media, with much more to come. The official announcement is on Xiotech's site."
This discussion has been archived. No new comments can be posted.

  • Re:Unclarity (Score:5, Informative)

    by TubeSteak ( 669689 ) on Tuesday April 08, 2008 @02:51PM (#23003714) Journal

    What is SAN?
    What does it do?
    How is it disruptive?
    Who does it disrupt?
    What does it store?
    http://en.wikipedia.org/wiki/Storage_area_network [wikipedia.org]
    It's remote storage.
    Their new tech saves you the trouble of swapping HDs.
    It disrupts the people offering maintenance contracts.
    It stores whatever you want.

    http://www.xiotech.com/images/Reliability-Pyramid.gif [xiotech.com]
    My question:
    What is "Failing only one surface"?
  • Re:Unclarity (Score:3, Informative)

    by Xaedalus ( 1192463 ) <Xaedalys @ y a h o o .com> on Tuesday April 08, 2008 @02:54PM (#23003748)
    Assuming you're being literal with your confusion... A SAN is a Storage Area Network that organizations use to back up data off their main networks. A lay explanation: think of your normal network and how it's connected. A SAN (usually composed of Fibre Channel or SCSI connections) underlays that existing standard network and moves all the data you want to back up to disk or tape, without eating up the bandwidth you have on your normal network. It's usually driven by a back-up server, or sometimes by a normal server (but only if you want to eat up your processing power). What this disrupts (if it's true) is how a SAN monitors itself. It's basically proactive monitoring and a different configuration of spinning disks. I'm not sure which RAID array they're using, so it may not be as 'revolutionary' as they're proclaiming it to be. Please note: any network admins, PLEASE feel free to correct me if I'm wrong (because there's nothing worse than giving a layman explanation that's inaccurate).
  • by maharg ( 182366 ) on Tuesday April 08, 2008 @02:56PM (#23003776) Homepage Journal
    not only self-diagnosis, but onboard disk remanufacturing ffs

    100 or so engineers involved in the project have replicated Seagate's own processes for drive telemetry monitoring and error detection -- and drive re-manufacturing -- in firmware on the Linux-based ISE. ISE automatically performs preventive and remedial processes. It can reset disks, power cycle disks, implement head-sparing operations, recalibrate and optimize servos and heads, perform reformats on operating drives, and rewrite entire media surfaces if needed. Everything that Seagate would do if you returned a drive for service.
  • Re:Unclarity (Score:5, Informative)

    by Anonymous Coward on Tuesday April 08, 2008 @02:59PM (#23003820)
    Just to clarify, SANs generally aren't used primarily for backups - they're just used for server storage to have a centralized and more thoroughly redundant setup than local disks (e.g. put your 40TB document repository on the SAN and connect it over fiber to the server, or have your server boot off the SAN instead of local disks, etc.). They're treated like local disks by the OS, but are in a different location connected via fiber or nowadays via iSCSI.

    While you can sometimes do some neat tricks with backups and a good SAN infrastructure, it's by no means its primary purpose in life.
  • Re:Disruptive? (Score:5, Informative)

    by kaiser423 ( 828989 ) on Tuesday April 08, 2008 @02:59PM (#23003826)
    well, RTFA. For mod points, it's disruptive because it runs Linux!

    The second article describes this very well. One big extra is that this system can perform all of the standard drive-repair operations that typically only OEMs can. This helps to keep you from replacing drives that aren't bad, but had a hiccup.

    It's also not just two drives in an ISE, but more like 10-20 (3.5" and 2.5" drives respectively) with a bunch of Linux software to give each ISE a pretty robust feature set in itself. They also up the block size to 520 bytes, leaving space for data-validity checks in order to keep the silent corruption problem from sneaking into the system.

    In the end, it's probably not wholly revolutionary. It does seem like an evolutionary jump though; with great performance, great feature set, and a very well thought out system that brings new technology and ideas to bear.
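    The 520-byte block trick is easy to picture: each 512-byte sector carries 8 extra bytes of integrity data that gets verified on every read, so corrupted data never silently reaches the host. A minimal sketch (the field layout and the CRC32 choice here are assumptions for illustration, not Xiotech's actual on-disk format):

    ```python
    import zlib

    SECTOR_DATA = 512   # user-visible payload per block
    SECTOR_RAW = 520    # on-disk block: payload + 8-byte integrity field

    def write_block(payload: bytes) -> bytes:
        """Pack a 512-byte payload into a 520-byte block with a CRC32 check field."""
        assert len(payload) == SECTOR_DATA
        crc = zlib.crc32(payload)
        # 8 spare bytes: 4 for the CRC, 4 reserved (real arrays also store
        # things like a reference tag / LBA here, which this sketch omits)
        return payload + crc.to_bytes(4, "big") + b"\x00" * 4

    def read_block(raw: bytes) -> bytes:
        """Unpack a block, raising if the payload no longer matches its checksum."""
        payload = raw[:SECTOR_DATA]
        stored = int.from_bytes(raw[SECTOR_DATA:SECTOR_DATA + 4], "big")
        if zlib.crc32(payload) != stored:
            raise IOError("silent corruption detected")
        return payload
    ```

    The point is that a flipped bit anywhere in the payload fails the check at read time, instead of being handed back to the application as good data.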
  • Re:Unclarity (Score:5, Informative)

    by DJProtoss ( 589443 ) on Tuesday April 08, 2008 @03:10PM (#23003944)
    You could use it for that, but that's not the main use.
    It *is* a network like your Ethernet network (with switches, adaptors, etc.), but usually it's FC (Fibre Channel) rather than Ethernet. You use a SAN to put your servers' disks in a separate box from the servers.
    But why would I do that? heat, consolidation, redundancy.
    A typical setup is to have a few 1U or 2U (rack heights are measured in U, which is 1.75") servers attached to a 3U storage controller.
    This is a box with lots (typically 14 in a 3u box) of drives. There will be a small computer controller in there too as well as some raid chips.
    Typically in a 14-drive box you might configure it as a pair of 5+1 RAID 5 arrays and a couple of hot spares (5+1 means five drives' worth of data and one drive's worth of parity). Effectively your 6 drives appear as one with 5x the capacity of a single component drive. You can survive the loss of one drive without losing data. If you do have a drive go offline, the controller should transparently start rebuilding the failed disk on one of the hot spares (and presumably raise a note via email or SNMP that it needs a new disk).
    The controller is then configured to present these arrays (called volumes in storage speak) to specific servers (called hosts).
    The host will see each array as a single drive (/dev/sdX) that it uses as per normal, oblivious to the fact that its in a different box.
    Now to revisit the why we do this:
    1. heat - by putting all the hot bits (drives) together we can concentrate where the cooling goes
    2. reliability - any server using the above setup can have a disk fail and it simply won't notice. With the hot-spare setup, you can potentially lose several drives between maintenance visits (as long as they don't all fail at once).
    3. cost - you can buy bigger drives, then partition your array into smaller volumes (just like you partition your desktop machine's drive) and give different chunks to different hosts, reducing per GB cost (which when you are potentially talking about tera and peta bytes worth of disk space is rather important).
    as for what these guys are up to, I've not had a chance to look yet. I might post back.
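    The 5+1 scheme above is just XOR across the data chunks: the parity chunk is the XOR of the five data chunks, and any one missing chunk can be rebuilt by XOR-ing the survivors with the parity. A toy sketch (function names are made up for illustration; real controllers also rotate parity across the drives and work at a much coarser stripe granularity):

    ```python
    from functools import reduce

    def parity(chunks):
        """XOR equal-length data chunks byte-by-byte to produce the parity chunk."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

    def rebuild(surviving, parity_chunk):
        """Reconstruct the single missing chunk from the survivors plus parity.

        Works because XOR-ing everything (including parity) cancels out all
        the chunks that are still present, leaving the missing one.
        """
        return parity(surviving + [parity_chunk])
    ```

    This is also why 5+1 gives you 5/6 of the raw capacity as usable space: one drive's worth of each stripe is spent on parity.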
  • Compellent (Score:4, Informative)

    by Anonymous Coward on Tuesday April 08, 2008 @03:14PM (#23003994)
    I'm not sure why this announcement is really news... Somewhat interesting, though, is that many of the founders and former employees of Xiotech have left to start a company called Compellent http://www.compellent.com/ [compellent.com]. Compellent's disk technology, imo, is a lot slicker than Xiotech's, particularly their "automated tiered storage".
  • Re:Unclarity (Score:5, Informative)

    by DJProtoss ( 589443 ) on Tuesday April 08, 2008 @03:21PM (#23004072)
    Ok, have now RTFA'd. Basically, they have built a shiny controller/enclosure. (The enclosure is the frame that contains the drives, and the controller is the circuit that interfaces with them; confusingly, controllers are often built into enclosures (especially on the lower end) and the whole thing is still referred to as a controller.)
    This controller is a sealed unit (read: better heat/vibration support, but not a user-serviceable component) with excess disks inside (multiple hot spares, so even if several drives fail over time it keeps going), combined with the knowledge SAN techs across the globe know: most errors are transient, and if you pull the disk out and stick it back in, it will probably work again. They have just built a controller that does that for you automatically. Definitely on the evolution rather than revolution side of things, and I have to admit I fail to see the disruption here, although I could well be missing something (the whitepaper is somewhat light on details, shall we say).
  • by Anonymous Coward on Tuesday April 08, 2008 @04:27PM (#23004886)
    The storage industry is notorious for trying to hide their price lists. Check out http://storagemojo.com/storagemojos-pricing-guide/ [storagemojo.com] for street prices on storage gear. It's not all up to date, but you can get a ballpark without requesting all sorts of quotes from a reseller.
  • Re:Disruptive? (Score:3, Informative)

    by igjeff ( 15314 ) on Tuesday April 08, 2008 @04:35PM (#23004988)

    You would think the idea would be to chuck in drives (with some minimum, like 8 or 12) and have the physical data storage be totally abstracted from the user, with N+2 redundancy and hotspare functionality totally guaranteed, and then allow the user to create LUNs without concern for underlying physical storage.

    When you need more space, you add more drives and the system manages the striping, re-striping as necessary for optimum throughput and maximum redundancy, rebuilding failed drives as necessary.
    There are systems out there that do this sort of thing, but they're *expensive*.

    Take a look at HP's EVA line. They're really quite good at this.

    I'd be careful about using the terms "optimum" and "maximum" in that last paragraph, but they get quite close to that mark.

    Other vendors have equipment that performs about as well...IMO, the HP EVA line is the best at it, however.

    Jeff (only affiliated with HP as a mostly happy customer)
  • by Phishcast ( 673016 ) on Tuesday April 08, 2008 @04:48PM (#23005146)
    Off the top of my head, all of the following companies have storage arrays which basically do exactly what you're asking for. When you create a LUN it's across all available spindles and data will re-balance across all available disks as you add more, all with RAID redundancy. I'm not sure about N+2 at this point, but RAID-6 is becoming ubiquitous in the storage industry.

    HP (EVA)
    3Par
    Dell/Equallogic
    Compellent
    Pillar
    HDS (USP)

    I'd be shocked if Xiotech doesn't do this today.

  • by jwgoerlich ( 661687 ) on Tuesday April 08, 2008 @05:12PM (#23005452) Homepage Journal

    What is "Failing only one surface"?

    A hard drive can fail in many ways: sector, track, platter, head. ISE can fail just the one surface -- say, a platter -- and keep writing to the remaining device. The broken platter is removed from service while the remaining disk storage continues to be used until end of life.

    This is all done automatically and transparently. What they are trying to eliminate is the time it takes for someone to physically swap out a disk.
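    The idea can be pictured as treating a drive as a set of independent recording surfaces and retiring only the bad one, rather than the whole drive. A toy model (the class and the numbers are purely illustrative, not ISE's actual bookkeeping):

    ```python
    class Drive:
        """Toy model: a drive as a set of independent recording surfaces."""

        def __init__(self, surfaces: int, gb_per_surface: int):
            # Map surface index -> usable capacity on that surface.
            self.good = {i: gb_per_surface for i in range(surfaces)}

        def fail_surface(self, i: int) -> None:
            # Remove just this one surface from service; the remaining
            # surfaces keep serving I/O instead of the drive being retired.
            self.good.pop(i, None)

        @property
        def capacity_gb(self) -> int:
            return sum(self.good.values())
    ```

    So a 4-surface, 400GB drive that loses one surface keeps running as a 300GB drive until end of life, with no one walking to the rack to swap it.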

    J Wolfgang Goerlich
