
Xiotech Unveils Disruptive Storage Technology
Lxy writes "After Xiotech purchased Seagate's Advanced Storage Architecture division, rumors circulated about what they were planning to do with their next-generation SAN. Today at Storage Networking World, Xiotech answered the question. The result is quite impressive: a SAN that can practically heal itself, as well as prevent common failures. There's already hype in the media, with much more to come. The official announcement is on Xiotech's site."
Looks like it's next-gen primary disk... (Score:2)
Re:Looks like it's next-gen primary disk... (Score:5, Informative)
Re: (Score:2)
Because of the overhead of designing such a system? I'm a pretty storage-savvy guy and I wouldn't have a clue where to start home-building the system you describe. Until there's a well-designed "Live CD"-type install that nets a simple-to-use appliance-type interface, this is not a viable alternative for most shops.
That said, this seems like a nightmare. All the drives are sealed into "Drive Pa
Re: (Score:2)
www.freenas.org
HTH. HAND.
Re: (Score:2)
That said, the next reply seemingly IS, except for the 1TB limitation of the "free" version.
But thanks for pointing those out, I'v
Re: (Score:2)
I've played around a bit with OpenFiler in a VM with the NAS and it was about what I expected for a web interface over Samba and NFS. I'm a bit more interested in the iSCSI functionality to serve as
Re: (Score:2)
I've worked with Veritas's VM, IBM's VM, Linux's VM, and Linux's MD, and I'm quite confident they are NOT the same. Perhaps they all provide redundancy, and the LVMs all slice and dice the disks in a similar manner (Multi-Disk is a fundamentally different product), but the Linux versions had a fair bit of catching up to do to get close to the commercial products in terms of both usability and functionality.
I've got 7 years experience managing SAN products from EMC
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Next generation hardware that is patent encumbered and will require a lawyer and several court proceedings for anyone and everyone to get their data back.
I mean, come on, when is the industry going to figure out that we do not need proprietary, closed storage solutions that are a rehash of the old IBM AS/400 days, when you could only buy super-expensive IBM gear.
No thanks, I will take my open code and commodity hardware and build solutions that will kick this patented solution's arse at 1
Re: (Score:2)
Re: (Score:2)
100 or so engineers involved in the project have replicated Seagate's own processes for drive telemetry monitoring and error detection -- and drive re-manufacturing -- in firmware on the Linux-based ISE. ISE automatically performs preventive and remedial processes. It can reset disks, power cycle disks, implement head-sparing operations, recalibrate and optimize servos and heads, perform reformats on operating drives, and rewrite entire media surfaces if needed. Everything that Seagate would do if you returned a drive for service.
My software RAID definitely doesn't do that.
Re: (Score:2)
As to prevention, Debian, e.g., runs a RAID check every 30 days by default. I add a long SMART self-test every 14 d
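For concreteness, a minimal sketch of scripting such a check with smartmontools (assuming smartctl is installed; the device path is a placeholder):

```python
#!/usr/bin/env python3
# Minimal sketch: kick off a long SMART self-test and report health.
# Assumes smartmontools is installed; the device path is a placeholder.
import subprocess

DEVICE = "/dev/sda"  # placeholder device

def start_long_selftest(device: str) -> None:
    # 'smartctl -t long' starts the drive's extended offline self-test
    subprocess.run(["smartctl", "-t", "long", device], check=True)

def health_ok(device: str) -> bool:
    # 'smartctl -H' prints the drive's overall health assessment;
    # per the man page, exit-status bit 3 set means "DISK FAILING"
    result = subprocess.run(["smartctl", "-H", device], capture_output=True)
    return not (result.returncode & 0b1000)

if __name__ == "__main__":
    start_long_selftest(DEVICE)
    print("healthy" if health_ok(DEVICE) else "FAILING")
```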
Unclarity (Score:2, Interesting)
These are just some of the questions popping into my head:
What is SAN?
What does it do?
How is it disruptive?
Who does it disrupt?
What does it store?
Can't say skimming through TFA makes it a lot clearer either.
Also, two obscure articles is media buzz?
Re: (Score:2)
Re: (Score:1)
Re:Unclarity (Score:5, Funny)
Re: (Score:2)
(* This
Re:Unclarity (Score:5, Informative)
What does it do?
How is it disruptive?
Who does it disrupt?
What does it store?
It's remote storage.
Their new tech saves you the trouble of swapping HDs.
It disrupts the people offering maintenance contracts.
It stores whatever you want.
http://www.xiotech.com/images/Reliability-Pyramid.gif [xiotech.com]
My question:
What is "Failing only one surface"
Re:Unclarity (Score:5, Funny)
Re: (Score:2)
Failing only one surface (was: Unclarity) (Score:3, Informative)
What is "Failing only one surface"
A hard drive can fail in many ways: sector, track, platter, head. ISE can fail just the one surface -- say, one side of a platter -- and keep writing to the rest of the drive. The broken surface is removed from service while the remaining disk storage continues to be used until end of life.
This is all done automatically and transparently. What they are trying to eliminate is the time it takes for someone to physically swap out a disk.
J Wolfgang Goerlich
Re: (Score:2)
Currently, if one chunk of a surface has a problem, the whole disk is considered bad. The disk has no way to communicate which part has died.
Xiotech's hooks into the firmware allow it to write around bad areas on the surface of a disk, and when a portion of a surface does fail it only has to rebuild that portion, rather than the entire disk drive.
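To make that concrete, a toy model (not Xiotech's actual firmware) of rebuilding only the extents on a failed surface:

```python
# Toy model (not Xiotech's actual firmware): retire one surface and
# rebuild only the extents that lived on it, not the whole drive.
EXTENTS_PER_SURFACE = 1000  # arbitrary illustration

class Disk:
    def __init__(self, surfaces: int):
        # map surface number -> extent ids currently in service there
        self.surfaces = {s: set(range(EXTENTS_PER_SURFACE))
                         for s in range(surfaces)}

    def fail_surface(self, surface: int) -> set:
        """Retire one surface; return only the extents needing rebuild."""
        return self.surfaces.pop(surface)

disk = Disk(surfaces=8)              # e.g. 4 platters x 2 heads
to_rebuild = disk.fail_surface(3)
total = 8 * EXTENTS_PER_SURFACE
print(f"rebuild {len(to_rebuild)}/{total} extents "
      f"({100 * len(to_rebuild) // total}% instead of 100%)")
```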
Re:Unclarity (Score:4, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re:Unclarity (Score:5, Informative)
While you can sometimes do some neat tricks with backups and a good SAN infrastructure, it's by no means its primary purpose in life.
Course you can do all that with a LAN too. (Score:2)
Re:Unclarity (Score:5, Informative)
It *is* a network like your Ethernet network (with switches, adaptors, etc.), but usually it's FC (Fibre Channel) rather than Ethernet. You use a SAN to put your servers' disks in a separate box from the servers.
But why would I do that? Heat, consolidation, redundancy.
A typical setup is to have a few 1U or 2U (rack heights are measured in U; 1U is 1.75") servers attached to a 3U storage controller.
This is a box with lots of drives (typically 14 in a 3U box). There will be a small controller computer in there too, as well as some RAID chips.
Typically in a 14-drive box you might configure it as a pair of 5+1 RAID 5 arrays and a couple of hot spares (5+1 means five drives' worth of data plus one drive's worth of parity). Effectively your 6 drives appear as one with 5x the capacity of a single component drive. You can survive the loss of one drive without losing data. If you do have a drive go offline, the controller should transparently start rebuilding the failed disk onto one of the hot spares (and presumably raise a note via email or SNMP that it needs a new disk).
The controller is then configured to present these arrays (called volumes in storage speak) to specific servers (called hosts).
The host will see each array as a single drive (/dev/sdX) that it uses as per normal, oblivious to the fact that it's in a different box.
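A quick sanity check on the 5+1 arithmetic above (a generic sketch, not tied to any particular controller):

```python
# Quick check of the 5+1 RAID 5 arithmetic: one drive's worth of
# capacity in each array is consumed by parity.
def raid5_usable(drives: int, drive_tb: float) -> float:
    return (drives - 1) * drive_tb

# a 14-bay shelf as two 5+1 arrays plus two hot spares
print(raid5_usable(6, 1.0))      # 5.0 TB usable per 6-drive array
print(2 * raid5_usable(6, 1.0))  # 10.0 TB usable from 12 of 14 bays
```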
Now to revisit the why we do this:
1. Heat - by putting all the hot bits (drives) together, we can concentrate where the cooling goes.
2. Reliability - any server using the above setup can have a disk fail and simply won't notice. With the hot-spare setup, you can potentially lose several drives between maintenance visits (as long as they don't all fail at once).
3. Cost - you can buy bigger drives, then partition your array into smaller volumes (just like you partition your desktop machine's drive) and give different chunks to different hosts, reducing per-GB cost (which, when you are potentially talking about terabytes and petabytes of disk space, is rather important).
As for what these guys are up to, I've not had a chance to look yet. I might post back.
Re: (Score:2)
Re:Unclarity (Score:5, Informative)
This controller is a sealed unit (read: better heat/vibration support, but not a user-serviceable component) with excess disks inside (multiple hot spares, so even if several drives fail over time it keeps going), combined with something SAN techs across the globe know: most errors are transient, and if you pull the disk out and stick it back in, it will probably work again. They have just built a controller that does that for you automatically. Definitely on the evolution rather than revolution side of things, and I have to admit I fail to see the disruption here, although I could well be missing something (the whitepaper is somewhat light on details, shall we say).
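A rough sketch of that automated pull-it-and-reseat-it logic as a retry ladder (a guess at the shape of it; the real controller code isn't published):

```python
# Sketch of the "most errors are transient" retry ladder described
# above -- escalating recovery steps before declaring a drive dead.
# Pure illustration; the Drive class is a stand-in.
import random

class Drive:
    """Stand-in for a drive whose faults are usually transient."""
    def reset(self):        pass
    def power_cycle(self):  pass
    def responds(self) -> bool:
        return random.random() < 0.8  # most errors are transient

def try_recover(drive: Drive) -> bool:
    # escalate: soft reset first, then a power cycle (the automated
    # equivalent of pulling the disk out and sticking it back in)
    for action in (drive.reset, drive.power_cycle):
        action()
        if drive.responds():
            return True   # fault cleared; keep the drive in service
    return False          # persistent failure; fail over to a hot spare

print("recovered" if try_recover(Drive()) else "spare in")
```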
Re: (Score:3, Interesting)
better heat/vibration support, but not a user-serviceable component
Heat is key here. Have you ever stood next to a petabyte of storage? Or even a few terabytes? Most SANs give off a lot of heat from all those disks. When sizing HVAC for a SAN, 1 TB to 1 ton is a typical rule of thumb.
Xiotech's ISE mounts the disks on a very large aluminum-alloy heat sink that wicks heat away from the drives. Less heat on the disks means better cooling and longer lifespan.
Xiotech had
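For rough numbers, the arithmetic for turning drive wattage into cooling load (the per-drive power figure below is an assumption, not from the article):

```python
# Back-of-envelope SAN heat load. The wattage figure is an assumption
# (~10-15 W per spinning 3.5" drive), not a number from the article.
WATTS_PER_DRIVE = 12
BTU_PER_WATT_HR = 3.412
BTU_PER_TON = 12_000          # 1 ton of cooling = 12,000 BTU/hr

def cooling_tons(drive_count: int) -> float:
    return drive_count * WATTS_PER_DRIVE * BTU_PER_WATT_HR / BTU_PER_TON

# a petabyte on 1 TB drives, ignoring controllers/fans/power supplies
print(f"{cooling_tons(1000):.1f} tons for 1000 drives")
```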
You MUST be new to storage technology. (Score:2)
The disruptive part is that it seems to be much more reliable, which would mean that you can wave the tech goodbye for a while, instead of having to lose access to a string of drives RAIDed together while they rebuild a drive that failed and needed replacement.
Think of running XFS without having to worry about the drives' physical reliability, because they're really reliable. (If you've got 5PB online, the question is usually "which drive just failed"
Re: (Score:2)
I'm not going to educate you except to tell you to Google for it. The disruptive part is that it seems to be much more reliable, which would mean that you can wave the tech goodbye for a while, instead of having to lose access to a string of drives RAIDed together while they rebuild a drive that failed and needed replacement.
Umm...I think you forgot what the "R" in RAID stands for. You may have somewhat degraded performance during a rebuild when you spare in for a drive which has failed, but you don't lose access to any data because of a single disk failure (Save for RAID 0, which isn't really RAID to begin with).
I wouldn't call this disruptive. It sounds like they've done some smart things to bring disks back to life when other hardware would call them failed, but you can bet that they're packaging more spares in these n
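For the curious, the reason a single failed drive costs no data in RAID 5 is plain XOR parity; a minimal sketch:

```python
# Why one failed drive costs no data in RAID 5: the missing block is
# the XOR of the surviving blocks. Minimal sketch.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xAA\x55"]   # blocks on 3 data drives
parity = xor_blocks(data)                        # the parity block

lost = data[1]                                   # drive 1 dies
rebuilt = xor_blocks([data[0], data[2], parity]) # XOR of the survivors
assert rebuilt == lost                           # reads keep working
print("rebuilt block matches:", rebuilt == lost)
```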
Re: (Score:2)
Re: (Score:2)
As a Mass Storage man for HP, I was hoping to see something about cool Storage Area Network technology. Self-healing and self-fixing SANs would indeed be a cool thing, because the way I see it, I get more questions about the infrastructure than about the actual boxes in the SAN.
If you look at current offerings from a couple of the major vendors, you'll see that there are boxes that are already guaranteeing 100% uptime and have all the redundancies and diagnostics built in to actually deliver the goods.
The
Re: (Score:2)
Disruptive? (Score:4, Insightful)
Maybe I'm missing something. I read their announcement and one of the articles on this new product. As near as I can tell, they're selling SAN systems where instead of plugging in individual drives, you plug in a box with two drives in it. They paired this with some nice software for working around failed sectors and rewriting correctable drive problems. I guess I'm just not all that impressed. Is this really "disruptive" technology? It looks like evolutionary improvements and some nice automation to take some of the grunt work out of managing a SAN.
I'm, admittedly, not an expert on network storage. So what do people think? Is this really the best thing since sliced bread, or just another slashvertisement hyped to sound like news for nerds while rehashing a lot of marketing weasel words?
Re: (Score:2)
Re: (Score:3, Interesting)
I would call that a great thing. I've never understood why I couldn't just have a bank of a dozen drives with another 10 empty slots, and have it move data around automatically to increase performance and maintain redundancy. When enough data is stored or enough drives break that I'm close to losing redundancy, a light turns on, and I pop
Re: (Score:3, Interesting)
One reason I can think of is because there is a high correlation of drive failures to the power supply and equipment that it'
Re: (Score:2)
The SA
Re: (Score:3, Informative)
You would think the idea would be to chuck in drives (with some minimum, like 8 or 12) and have the physical data storage be totally abstracted from the user, with N+2 redundancy and hotspare functionality totally guaranteed, and then allow the user to create LUNs without concern for underlying physical storage.
When you need more space, you add more drives and the system manages the striping, re-striping as necessary for optimum throughput and maximum redundancy, rebuilding failed drives as necessary.
There are systems out there that do this sort of thing, but they're *expensive*.
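A toy sketch of that chuck-in-a-drive-and-restripe idea (real arrays are vastly more careful; illustration only):

```python
# Toy sketch of "add a drive, restripe automatically". Real arrays
# preserve redundancy during every move; this only shows the shape.
class Pool:
    def __init__(self, drives: int):
        self.drives = drives
        self.chunks = []          # chunk index -> drive number

    def allocate(self, n_chunks: int):
        for _ in range(n_chunks):
            # place each new chunk on the least-loaded drive
            loads = [self.chunks.count(d) for d in range(self.drives)]
            self.chunks.append(loads.index(min(loads)))

    def add_drive(self):
        self.drives += 1
        # restripe: move every Nth chunk onto the new spindle
        for i in range(0, len(self.chunks), self.drives):
            self.chunks[i] = self.drives - 1

pool = Pool(drives=4)
pool.allocate(12)
pool.add_drive()
print([pool.chunks.count(d) for d in range(pool.drives)])  # per-drive load
```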
Take a look at HP's EVA line. They're really quite good at this.
I'd be careful about using the terms "optimum" and "maximum" in that last paragraph, but they get quite close to that mark.
Other vendors have equipment that performs about as well...IMO, the HP EVA line is the best at it, however.
Jeff (only affiliated with HP as a mostly happy customer)
Re: (Score:2)
It's nice having the ability to abstract things completely when performance isn't paramount, but when those performance bottlenecks start to become an issue, it's nice to remove the abstraction and start becoming more specific about how things interact.
As a for instance, I
Re: (Score:2)
Re: (Score:2)
This exists, and has for a while (Score:3, Informative)
HP (EVA)
3Par
Dell/Equallogic
Compellent
Pillar
HDS (USP)
I'd be shocked if Xiotech doesn't do this today.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
The downside is it only has 4 bays and connects via USB 2.0.
Re: (Score:2)
If, for that price (or less; those things are really overpriced), it had onboard GigE and the performance problems were fixed, it'd probably be a decent deal.
Re: (Score:2)
But if someone could convince them to put their technology to use in a professional storage array instead of a consumer-level RAID For Dummies...
Re:Disruptive? (Score:5, Informative)
The second article describes this very well. One big extra is that this system can perform all of the standard drive-repair operations that typically only OEMs can. This helps to keep you from replacing drives that aren't bad, but had a hiccup.
It's also not just two drives in an ISE, but more like 10-20 (3.5" and 2.5" respectively), with a bunch of Linux software to give each ISE a pretty robust feature set in itself. They also up the block size from 512 to 520 bytes, leaving space for data-validity checks to keep silent corruption from sneaking into the system.
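A toy illustration of how those 8 extra bytes per sector can catch silent corruption (this uses a simple CRC32 plus block address, not the actual T10 DIF field layout):

```python
# Toy 520-byte sector: 512 bytes of data plus an 8-byte integrity tag.
# Real arrays use T10 DIF-style fields; CRC32 + LBA is illustrative.
import struct
import zlib

def write_sector(data: bytes, lba: int) -> bytes:
    assert len(data) == 512
    tag = struct.pack(">II", zlib.crc32(data), lba)   # 8 extra bytes
    return data + tag                                  # 520 on the platter

def read_sector(sector: bytes, expected_lba: int) -> bytes:
    data, (crc, lba) = sector[:512], struct.unpack(">II", sector[512:])
    if crc != zlib.crc32(data) or lba != expected_lba:
        raise IOError("silent corruption or misdirected write detected")
    return data

s = write_sector(b"\x00" * 512, lba=42)
assert read_sector(s, expected_lba=42) == b"\x00" * 512
```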
In the end, it's probably not wholly revolutionary. It does seem like an evolutionary jump, though: great performance, a great feature set, and a very well-thought-out system that brings new technology and ideas to bear.
Evolutionary, but on multiple fronts. (Score:2)
They've made those OEM services on the device AUTOMATICALLY kick in. (evolution step 2)
They've sealed the units. (evolution)
Which, in effect, means that most of the SAN expertise that FORMERLY required an experienced tech is now incorporated, and these SANs can be installed and "maintained" by less technically skilled personnel.
Which will make these devices VERY easy to sell. You pay ONCE for the tech and save on the cost of the technician's sa
Re: (Score:2)
we shall see if they are affordable
Sweet... (Score:3, Funny)
We have a Xiotech Magnitude that we paid ~$150K for in 2003 that is sitting around like a giant paper weight. Any takers? $3,000? $2,000? going once... going twice...
Re: (Score:2, Interesting)
Re: (Score:2)
HAHAHA
My biggest bitch was that the $150K solution consisted of $37K of HW and the rest was software licenses and/or support that doesn't seem to be transferable. This basically means that there is no secondary market for the devices because anyone who would buy one would need to buy new software licenses. Since the SW licenses are more valuable than the HW, it wouldn't make sense to buy used HW. "nice"....
The above weighed in heavily in our decis
Re: (Score:2)
Move along, nothing to see here (Score:5, Insightful)
The only thing this is likely to disrupt is Xiotech's cashflow.
Re: (Score:2)
If (another big IF) the unit keeps soft-failed drives (which weren't really bad to begin with) in play longer because it can recover them from *burps* in the system, then the unit could well turn out to be a money-saver.
Re: (Score:2)
Re: (Score:2)
Seriously, it's not like an enterprise disk array owner can just stop over at Best Buy, pick up a new drive and pop it in whenever he feels like it.
Sure the price of DISKS will go down but you know the cost of having some monkey stop by the data center to replace a failed DMX drive isn't going anywhere.
Supposedly the maintenance from Xiotech is going to be $1 on these things. Gimmick, sure, but in theory that's where the
Re: (Score:2)
People are so used to disks failing. Disks shouldn't fail as often as they do, and most of the time they don't fail at all - the storage controller is at fault because the drive and the controller have such a limited language (SCSI) to talk to each other with. ISEs do away with this limitation.
Re: (Score:2)
Lab failure rates mean very little.
Didn't Google just blow the lid off the disk manufacturers' MTBF numbers by reporting their own failure rates as an order of magnitude higher?
Wait till they have a few thousand of these deployed. Then we'll know how good they really are...
And, that's when companies with big money to spend will take notice.
Re:Move along, nothing to see here (Score:4, Interesting)
Guess what? It didn't work out. The bad zones spread, and they spread faster than the RAID software could detect the new failure and rebuild onto the spare.
I quite enjoyed the experiment, but these were my home servers. I wouldn't dream of doing this in a production environment. When the RAID controller kicks the drive for -any- reason, it's back to the manufacturer for warranty replacement. The data is far too valuable to play games with.
Re: (Score:2)
I mean, I still wouldn't trust it in a production environment, as you said, I just wonder how a more dynamic system (RAID-Z, for example) would handle the same scenario. At
Re: (Score:3, Interesting)
The purpose of this product isn't to penetrate large data centers... or if Xiotech thinks it is, then they need new marketing employees (and quickly). Large data centers HAVE the expertise on site to do individual disk replacements, and those large enterprise data centers will demand the feature sets that exist in the much larger equipment from the larger vendors named above.
This is targeted at much smaller data center
Looking forward to my Chihiro drive! (Score:1, Offtopic)
Oh, that was Sen [nausicaa.net] . My bad, sorry.
(Well, it makes as much sense as anything. It's not like I'm going to bother reading TFA when it's clearly marked "hype".)
astroturfing (Score:2)
The whole thing sounds like astroturfing.
Re: (Score:3, Funny)
Re: (Score:2)
How is hot-swapping a drive self healing?
It isn't. Who said it was?
The company claims two innovations: self healing and rapid replacement. Both are commonplace.
Clear now?
Tired of overused buzzwords (Score:5, Interesting)
My favorite misuse was when a marketing droid referred to Intel moving from a
Re:Tired of overused buzzwords (Score:4, Funny)
Disrupted AMD pretty good, from what I can see.
Re: (Score:2)
Sorry about the pessimism (it's exam week)
Just marketing redefining words. (Score:2)
Hey!
- If Microsoft's marketing department can redefine "Wizard" from "Human computer expert acknowledged as exceptionally skilled by his peers" to "only moderately brain-damaged menu-driven installation/configuration tool",
- why can't Xiotech's marketing department redefine "disruptive technology" from "quantum leap in price/performance ratio of a competing technology based on a massively different architecture th
Re: (Score:2)
pedantic, I know...
but
-nB
Monty Pedantic (Score:2)
it's 65 and 45nM or .065 and .045uM
pedantic, I know...
Not nearly pedantic enough, unless you knew Herr Meter personally. Tell me, Herr Meter, why were you named that way? What did you discover to make yourself famous? And why did you say that Ångström deserved what he got?
Funny, I was thinking on the way home that the intergalactic subway machine in Contact blew up because the alien schematic contained a typo calling for 1 eV, and the people building it failed to read it as 1 exavolt.
Wikipedia tells me that 1 EeV/c^2 = 1.783×10^-18 kg. Really?
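For what it's worth, the conversion does check out; a quick back-of-envelope:

```python
# Quick check of the EeV-to-kilograms conversion via m = E / c^2.
EV_TO_JOULES = 1.602176634e-19
C = 299_792_458.0  # m/s

energy_j = 1e18 * EV_TO_JOULES   # 1 EeV in joules (~0.16 J)
mass_kg = energy_j / C**2
print(f"{mass_kg:.3e} kg")       # ~1.783e-18 kg -- so yes, really
```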
Re: (Score:2)
Compellent (Score:4, Informative)
Re: (Score:3, Interesting)
Say you want data automatically moved down to a slower tier, but it gets touched just once a day. Good luck getting it to move down automatically.
I anxiously await the day when the SAN market is acknowledged as the scam it is (a glorified RAID controller), and the various SAN companies die off in droves or become an everyday appliance.
Re: (Score:2)
We didn't shell out the $s for the licenses because the old model (i.e. my databases are RAID-10, my file servers are RAID-5, etc) works "good enough" when compared to the sticker of the automated tiered storage licenses.
Re: (Score:2)
Say you want data automatically moved down to a slower tier, but it gets touched just once a day.
Data Progression does the moving. DP only runs once a day by default, but you can change that schedule. You can also kick DP off manually. How? Ask Copilot.
J Wolfgang Goerlich
Re: (Score:2)
Want a big box full of disk, fully redundant (RAID 1, 5, 5+1, 10, etc.)? Want it cheap? Got spare parts? Then, my friend, FreeNAS is for you. A homebrewed SAN that delivers enterprise-capable performance for practically nothing.
Oh, you want FAST disk? OK, then you have to shell out for SAS or FC disks. You can still use FreeNAS, but now your hardware costs have gone up.
Your box has multiple SAS controllers, multiple SAS drives, and now what? Gotta go ex
Re: (Score:2)
Also, doesn't Xiotech do automated tiers of storage (ILM or some weird acronym)?
Btw, I don't work for either of these companies, but I have evaluated both products extensively.
Hypocritical Reluctance (Score:2, Insightful)
"#1 Lowest cost per disk IOPS
#1 Lowest cost per MB/sec"
Looking around, I don't see any quoted prices on the page.
It's funny how it's always a project in itself to find the price tag for products. When companies run on "the bottom line," why are they so reluctant to tell us the consumer's "bottom line" straightforwardly and upfront?
It should become law that to advertise a product, you must clearly post the price tag (or range) at either the top or the bottom. Especially if you are
Re: (Score:2)
A 1TB array with 40 15k 2.5" drives in RAID 1 is $36,500 (list price is $61k; the price used by the SPC includes a 40% discount!) with a three-year, 24/7, 4-hour maintenance contract. It generates 8,720.12 SPC-1 IOPS, making it $4.19/IOPS.
The other tested config used 20 146GB drives to get 5,800 IOPS for $21k, or $3.53/IOPS.
(a 12TB ne
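The per-IOPS arithmetic, for anyone checking (the $20.5k figure below is back-solved from the quoted $3.53/IOPS, consistent with "$21k"):

```python
# Cost-per-IOPS arithmetic from the figures quoted above.
def cost_per_iops(price_usd: float, spc1_iops: float) -> float:
    return price_usd / spc1_iops

print(f"${cost_per_iops(36_500, 8_720.12):.2f}")  # -> $4.19
print(f"${cost_per_iops(20_500, 5_800):.2f}")     # -> $3.53 (~"$21k")
```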
"a SAN that can practically heal itself" (Score:3, Insightful)
Wrong metaphor (Score:2)
If you can use your gas gauge to measure how fast you're going, you're probably driving too fast.
Nothing to see here, move along... (Score:2)
Seagate Tomcats anyone?
Also, would you trust your enterprise storage to laptop drives? Running 24/7/365...
How long will those last?
Hell, most SATA disks are unsuitable for anything but nearline storage, and even then, they're iffy...Keep plenty of spares!
Re: (Score:2)
Re: (Score:2)
Still, I personally don't like "black box" systems...
Re: (Score:2)
Other things along these lines (Score:2)
Avid Unity ISIS [avid.com]
Omneon MediaGrid [omneon.com]
DataDirect S2A [datadirectnet.com]
Re: (Score:2)
Re: (Score:2)
Hah, I'd totally deserve it!
NAS is big again (Score:2)
Re: (Score:2)
They speak great management buzzword. But not tech - buzzword or otherwise.