Hard Disk Sector Consolidates Amid Uncertain Future
Hugh Pickens writes "The WSJ reports that Western Digital will buy Hitachi Global Storage Technologies for about $4.3 billion in cash and stock, leaving only four key hard disk drive vendors — Seagate, Western Digital, Toshiba and Samsung. The hard drive world has been seen as ripe for consolidation, particularly as the rise of tablet computers such as the Apple iPad — which don't use hard drives for data storage — is casting doubt on the future of hard disks. Compared to hard drives, solid-state drives promise greater power efficiency, performance, resistance to physical shock, and run more quietly since they contain no moving parts. But one area that solid-state drives do not improve on their spinning predecessors is in their inevitable movement towards failure. 'SSDs are going to fail just like hard drives will,' says Chris Bross, Senior Enterprise Recovery engineer at Drivesavers Data Recovery. 'Every storage device will have issues regardless of their underlying technology.'"
Ehh (Score:3, Informative)
This isn't all that different from when Seagate bought Maxtor [slashdot.org]. Back then, after the sale, Seagate controlled 44% of the market [arstechnica.com], compared to the nearly 50 percent market share this deal bestows upon Western Digital [wsj.com].
Re: (Score:2)
No harddrives in the future (Score:5, Funny)
There will be no hard drives because we'll just store all our data in the cloud. (ducks)
Re: (Score:2)
Mmmm, dark bits. I wonder how much data could be stored in latency. In other words, how much data you could store by saturating all the cables of the world before the data gets to where it's going. Like a token ring network, you just have to wait for your data to come back to you. Might be waiting a while, though.
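For a rough sense of scale, here's a back-of-envelope sketch; the link speed, latency, and cable count are made-up assumptions, not real figures:

    # Data "stored" in flight on a link is roughly its bandwidth-delay product.
    # Every number below is a hypothetical assumption for illustration only.
    link_speed_bps = 100e9      # assume a 100 Gbit/s long-haul cable
    round_trip_s = 0.2          # assume ~200 ms round-trip latency
    num_links = 400             # assume a few hundred such cables worldwide

    bits_per_link = link_speed_bps * round_trip_s
    total_bytes = bits_per_link * num_links / 8

    print(f"Per link: {bits_per_link / 8 / 1e9:.1f} GB in flight")
    print(f"Across {num_links} links: {total_bytes / 1e12:.1f} TB total")

Even with generous assumptions, the planet's cables only "hold" a few terabytes in flight, so it's more a thought experiment than a storage tier.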
Re: (Score:2)
I've always believed that some forms of future data storage / backup could take the shape of a continuous broadcast of bits into space, to some satellite or spacecraft that beams it back to us. And back and forth.
Re: (Score:3)
Re: (Score:2)
What you describe is essentially one of the earliest forms of memory, delay lines. [wikipedia.org]
Back in the old days... (Score:3)
Re: (Score:2)
Since there was no reason you couldn't send email to yourself
Didn't some e-mail providers back then bill by the byte, by the hop, or both? Or was it unmetered?
Re: (Score:2)
Ignoring that... I suspect that the amount of data that can be in transit in the world might be limited by the buffer memory of the network switches involved. The switches make it unclear how much you could really store in latency. AFAIK they start dropping packets
Re:No harddrives in the future (Score:5, Insightful)
You don't have to duck if you're an AC :]
But the AC's point was that datacenters will still use a lot of spinning disk until SSDs reach a comparable $/byte ratio. Building a 100TB SAN array out of SSDs would cost many times as much as doing it with traditional spinning disk.
I'm not saying it won't happen, just saying we're probably 3-5 years away from it.
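As a rough illustration of that cost gap, here's a quick sketch with made-up per-GB prices; plug in whatever the going rates are, the shape of the comparison is the point:

    # Hypothetical $/GB figures purely for illustration; not current quotes.
    capacity_gb = 100_000            # 100 TB usable

    hdd_price_per_gb = 0.05          # assumed enterprise HDD price
    ssd_price_per_gb = 1.50          # assumed enterprise SSD price

    hdd_cost = capacity_gb * hdd_price_per_gb
    ssd_cost = capacity_gb * ssd_price_per_gb

    print(f"HDD array media cost: ${hdd_cost:,.0f}")
    print(f"SSD array media cost: ${ssd_cost:,.0f} ({ssd_cost / hdd_cost:.0f}x)")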
Re: (Score:2)
Unless you need IOPS on the order of 100 times higher. Then you move to a hybrid model that some of the higher-end drive array chassis offer, with features like dynamic allocation.
Re: (Score:2)
Re: (Score:3)
Uh...no, they don't. MTBFs are rated at 4-10 times those of most moving-parts disks. They only have short lifespans if you're not implementing wear leveling (to wit, I will observe that only bulk flash these days doesn't implement this...).
I honestly wish people would quit propagating falsehoods about this. Seriously.
SSDs and HDs have their domains where they excel. As Flash or something better gets cheaper, SSDs will take over the problem sets that moving-parts disks solve. Until t
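For anyone unsure what wear leveling actually buys you, here's a toy sketch (emphatically not any vendor's real algorithm): each write gets redirected to the least-erased free physical block, so hammering one logical block doesn't hammer one set of cells.

    # Toy wear-leveling sketch: remap each write to the least-erased free block.
    # Real controllers are far more elaborate (garbage collection, ECC, etc.).
    class ToyFlash:
        def __init__(self, physical_blocks=8):
            self.erase_count = [0] * physical_blocks   # erases per physical block
            self.mapping = {}                          # logical -> physical
            self.free = set(range(physical_blocks))    # unmapped physical blocks

        def write(self, logical_block):
            if logical_block in self.mapping:
                old = self.mapping.pop(logical_block)  # retire the old copy
                self.erase_count[old] += 1             # that costs an erase
                self.free.add(old)
            target = min(self.free, key=lambda b: self.erase_count[b])
            self.free.remove(target)
            self.mapping[logical_block] = target

    flash = ToyFlash()
    for _ in range(1000):
        flash.write(0)            # hammer a single logical block
    print(flash.erase_count)      # wear ends up spread across all blocks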
Re: (Score:2)
And there's the rub: no one's done an exhaustive study to determine the real MTBFs of the varying wear-leveling algorithms, or the latency involved given varying degrees of storage over delta-T.
There are several patents that cover the application of the algorithms, but no one's pounded the living hell out of the drives over their operating ranges thoroughly enough to tell just how long a drive lasts until the junctions cough blood. Right now, sadly, HDs are the devils we know. SSDs are often embedded, and ther
Re:No harddrives in the future (Score:4, Interesting)
Re: (Score:2)
Well, for high performance and reliability you will still want to use RAID even with SSDs. If I had an HA database server I would add an extra drive as a hot standby, and then once every six months or so I would promote the standby to active and swap in a new SSD as the hot standby. Repeat for as long as the server is up.
Re: (Score:2)
Two drives for spare, with RAID6 (so you can lose two drives), so you're never at the mercy of one drive failing. More expensive for sure. But data loss/downtime is more expensive in most cases.
Re: (Score:2)
Re: (Score:2)
Hmm, depends on your app I guess. We were streaming LHC collider data from CERN at 40Gbps to Nexsan SAN/NAS boxes with arrays in RAID6 with spinning 7200RPM drives, and didn't have too much trouble. YMMV.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Normal RAID actually wears out drives faster when dealing with SSDs. You should talk to Violin Memory (www.vmem.com) about some of the things they discovered.
Re: (Score:3)
Re: (Score:2)
Yeah, if you write to them all the time. If Netflix fills its datacenters with SSDs, writes the video files to them once, and only does reads off of them to stream to customers, I'm sure those SSDs are going to last damn near forever.
Re:No harddrives in the future (Score:5, Funny)
I for one will not be entrusting my sensitive data to ducks, airborne or otherwise.
Re:No harddrives in the future (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
I know you were trying to be funny but, in a way, I think you are right. Hard drives may not cease to exist, but they very well may disappear into the cloud. Hard disks have a lot of life left in them for server (i.e. "cloud") uses. But $/TB isn't all that important in notebooks or desktops. There, hard drive capacity is outstripping need, and SSDs are getting close to providing enough capacity at a reasonable price. What happens when hard drives disappear from Best Buy, Frys, etc. and cease being a consu
Modern drives are *too* reliable?? (Score:5, Interesting)
For the end-user, it's great that the average lifespan of a drive is measured in years. For the manufacturers, not so good.
Since upgrading my power supplies I've had very few drive failures over the past five years. I've purchased drives to expand storage, but rarely to replace. Across 10 laptops I have replaced two failed drives in two years. On the desktops, with about twenty drives spread across 5 machines, I've replaced maybe two units in two years. These run continuously and are rarely rebooted, apart from semi-annual shutdowns to replace fans and clean out the dust.
Future not so uncertain anymore (Score:5, Insightful)
Now that it seems clear the rewrite count is going to hell - 5000 cycles/cell for 32 nm, 3000/cell for 25 nm - SSDs are going to have a helluva time catching up in cost/GB. People will still want huge storage disks, data centers still need storage, hard disks aren't going away. SSDs do rock for speed and are making huge performance gains, but that doesn't bring the cost down. The combination of a blazing fast 100GB SSD and a huge, slow 2TB HDD seems to be the way forward.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But in the future it could seriously be the norm to just download on demand all that back content you didn't watch.
That future is still fairly far off.
First, you need a fast Internet connection. Even with a solid 2MB/sec dedicated to downloading video, it would take around 20 minutes to download an hour-long TV show (720p with a reasonable bitrate), and that's assuming the server will feed you data that fast. To be safe, you'd probably need at least 5 minutes of pre-buffering, and likely a lot more judging by how often 360p YouTube videos pause to download more.
Second, you'd need a reliable source for the download that would
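The 20-minute figure in the first point checks out under some assumed numbers; the bitrate here is a guess at "reasonable" 720p, not a quoted spec:

    # Rough check of the download-time estimate. The bitrate is an assumption.
    bitrate_mbps = 5.0              # assume ~5 Mbit/s for decent 720p
    show_length_s = 60 * 60         # one hour
    download_MBps = 2.0             # the 2 MB/sec from the post

    file_size_MB = bitrate_mbps * show_length_s / 8    # megabits -> megabytes
    minutes = file_size_MB / download_MBps / 60

    print(f"File size: about {file_size_MB / 1000:.1f} GB")
    print(f"Download time at 2 MB/s: about {minutes:.0f} minutes")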
Re: (Score:2)
Re: (Score:2)
Likewise with self-made music or movies. A high-quality
Re: (Score:2)
Now that it seems clear the rewrite count is going to hell - 5000 cycles/cell for 32 nm, 3000/cell for 25 nm - SSDs are going to have a helluva time catching up in cost/GB.
With these reduced process sizes come higher capacities, so the overall erase limit as measured in GB ends up being nearly the same per unit area (32*32 = 1024, 25*25 = 625; 1024/625 is about 1.64, roughly the same ratio as 5000/3000).
Furthermore, the erase limits as measured in GB of the higher-capacity SSDs were, and continue to be, enormous. The (now old) 80GB X25-M drive can sustain over 200GB per day of block erases for over 5 years (Intel figures it to be "only" equivalent to 100GB/day in random write
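Taking those figures at face value, the lifetime tally is easy to re-run; this just repeats the arithmetic above and is not a vendor-confirmed endurance spec:

    # Re-running the endurance arithmetic quoted above.
    capacity_gb = 80            # X25-M capacity
    writes_per_day_gb = 200     # sustained daily writes claimed above
    years = 5

    total_writes_tb = writes_per_day_gb * 365 * years / 1000
    full_drive_writes = total_writes_tb * 1000 / capacity_gb

    print(f"Total writes over {years} years: ~{total_writes_tb:.0f} TB")
    print(f"That is ~{full_drive_writes:,.0f} complete overwrites of the drive")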
Re: (Score:2)
Heh... I've held that this is probably going to be the case for many applications for the near to medium future. It should be noted that there are several technologies waiting in the wings to "replace" Flash memory, and pretty much all of them, if they end up being successful, will render this discussion moot. :-D
Re: (Score:2)
I can't remember the last time I saw a non-geek's laptop or any work laptop with more than 40 or 50 gigs of space used. There's a real opportunity for SSDs to enter the mid-range laptop market and business market with 120-160 gig drives.
Price isn't great now, but the performance is great. Once people get used to an SSD laptop they'll start to hate their mechanical-disk laptop. They'll be asking "why is this so slow to boot up and why is it slower than yours?" Just like they are now used to multi-c
Punny! (Score:5, Interesting)
The "hard disk sector" consolidates, hmm?
For a moment, I did a double take and thought of Stac [wikipedia.org].
Re:Punny! (Score:5, Insightful)
Re: (Score:2)
Well, I thought that the hard disk sector was finally consolidating into a cylinder.
In other News.... (Score:5, Funny)
"...and to commemorate their latest acquisition, Western Digital announces a new line of ultra-green drives...a spokesman had this to say..."
"Yep, these drives are so power-conservative, they actually stop consuming power permanently 30% faster than our previous line. We're calling them 'Hitachies'"
Re: (Score:2)
The tried & trusted will still rule the seriou (Score:3)
'SSDs are going to fail just like hard drives will,' says Chris Bross, Senior Enterprise Recovery engineer at Drivesavers Data Recovery. 'Every storage device will have issues regardless of their underlying technology.'
I do not see SSDs playing a major role anywhere near the traditional large database, especially in financial institutions. In our trials with a PostgreSQL database that had 17 tables, the largest of which had 23.1 million records and 9 columns, on a Dell notebook, these drives failed after about a week of intense reads/writes!
My former boss, who was a closed-source stooge, blamed the DB. Others like me knew these SSDs were not yet ready for prime time. By the way, all this was about 2 years ago. Technology could have changed for the better by now.
Re: (Score:2)
Not really improved. I burned out a REALLY GOOD (best available) SLC SSD in 7 months with a mirrored production workload at a previous jobsite not that long ago.
Poof. All gone.
At the FAST conference there was yet another presentation on SSD lifetime burnout mechanisms; the news on lifetime is not actually improving in the slightest so far. SLC is not good enough; MLC is toast in write-intensive apps.
Phase-change memory or one of the others, with millions of write cycles per bit, may pull this out, but Flash is not provi
Re: (Score:2)
I think both your boss and you need a refresher course in technology.
Two years ago, consumer SSDs didn't have proper TRIM support. The drive probably didn't die; it just needed a wipe and rewrite. And you were probably using consumer-grade OCZ drives, and not the better (and way more expensive) commercial drives, which had the better chips in them (and TRIM).
If you were doing database transactions on SSDs, you'd realize that there is no way for an HDD to compete with an SSD in IOPS. If you really wanted and needed I
Re: (Score:2)
I've tried to do large database server farm tests on modern enterprise SSDs with TRIM, the best wear leveling, SLC, etc. They go "poof" at moderate (a few months, for my loads) lifetimes.
IOPS x Lifetime / price is a metric I find useful. Unfortunately, it makes SSD look even worse than it does just on a price basis 8-(
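Spelling that metric out, since it's trivial to compute once you've measured a lifetime under your own workload; every number below is an invented placeholder, and the ranking depends entirely on the real figures you plug in:

    # IOPS x lifetime / price, with made-up illustrative numbers only.
    def io_per_dollar(iops, lifetime_years, price_usd):
        """Total I/O operations delivered per dollar over the device's life."""
        seconds = lifetime_years * 365 * 24 * 3600
        return iops * seconds / price_usd

    ssd = io_per_dollar(iops=30_000, lifetime_years=0.5, price_usd=700)  # assumed
    hdd = io_per_dollar(iops=180, lifetime_years=5.0, price_usd=150)     # assumed

    print(f"SSD: {ssd:.2e} IOs per dollar")
    print(f"HDD: {hdd:.2e} IOs per dollar")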
Re: (Score:2)
Re: (Score:2)
"My former boss, who was a closed source stooge blamed the DB. Others like me knew these SSDs were not yet ready for prime-time. By the way all this was about 2 years ago. Technology could have changed for the better now."
That is almost certain. First of all, everybody who is serious in the field will tell you that that kind of application requires an enterprise drive (read: SLC flash). Chances are that the SSD you tested with was an older drive with the failed Micron chipset, or maybe even olde
Re: (Score:2)
Enterprise SSD units used to be just lots of DRAM with a big battery and a spinny disk.
The big battery gave you time to flush the RAM to physical disk in the case of a power loss.
Re: (Score:2)
Hitachi Deathstar (Score:2)
I've never been able to get myself to buy Hitachi drives after the deathstar episode.
http://en.wikipedia.org/wiki/Hitachi_Deskstar [wikipedia.org]
Re: (Score:3)
Just about every major drive vendor has had a similar problem at one point or another. Western Digital's original 3 platter 1.6G drives failed in droves, eventually leading them to replace all of them with a 2 platter version for free. More recently, Seagate had a bad problem with their Barracuda 7200.11 model line; are you also going to avoid Seagate?
It happens to almost everybody at some point. Do you not buy Intel products because of the Pentium FDIV or F00F bugs? DeWalt power tools used to be great,
Re: (Score:2)
The only reason that was momentous (no pun intended) was because it was IBM that it happened to. At the time, they were the authority on reliable storage, and it came as a bit of a shock to everyone.
Ripe? I'm not sure I agree. (Score:2)
On a different note:
"But one area that solid-state drives do not improve on their spinning predecessors is in their inevitable movement towards failure."
I would argue that it's actually much worse. It is possible to recover most or all of the data from most hard drives that fail. Try that with the newer SSDs.
Death of the HDD - not yet.... (Score:5, Interesting)
The death of the HDD has been predicted a few too many times...
It's still the cheapest storage with easy access out there.
Consolidation is not only expected, but somewhat necessary.
I spent 15 years in the HDD industry, and some things to understand:
- It takes roughly 70 people and 6-9 months to design and develop a new disk drive.
- Product lifetime has been as short as 2 months and as long as 1 year.
- Typical product lifetime is 3-6 months.
- A company needs to have multiple design teams doing multiple product designs phased for phased product releases.
If the product is late, it's already obsolete, and will not sell.
If the product is slightly behind the times, it will not sell.
Because of the above, NRE expenses are huge, so margins or volumes have to be huge to make any money.
Margins went to nothing many years back, so the volumes need to be huge. Thus fewer players are the result of all that.
Because of the above, dozens of companies that used to make disk drives are now long gone.
All of that said, "the death of the HDD has been greatly exaggerated"
- it's cheap, high-volume storage, and all in all "fairly" reliable.
Intel SSD's have low return to manufacturer (Score:2)
If my information is correct, the number of defects for SSDs is about the same as for HDDs, with the exception of the Intel SSDs, which cut the number of returns by about a factor of 4 (can't find the article using Google, if anybody has a link?). I've returned mine because of a failed firmware update to remove a controller bug. I would not be amazed if the actual number of failed drives is about 8 times lower than HDD. So sure they fail, but I think that the failure rate will be more like that of DRAM than HDDs. And c
Re: (Score:2)
Really the biggest complaints I have with SSDs right now are with either the mobo/chipset manufacturers or the people writing the AHCI drivers, who aren't checking for compatibility. I ran into a lovely bug with my Vertex series drive (though it applies to nearly all, and randomly); it involves suspend and recovery states and a badly written driver.
When the drive is put into a suspend or sleep state and you 'wake' the PC up, the drive will randomly lock up down the road. And even if it doesn't lock, on a reboot th
Re: (Score:2)
I would not be amazed if the actual number of failed drives is about 8 times lower than HDD.
I would. Who bothers returning/RMAing a failed hard drive if it's one of the cheap models? Or after 2-3 years of use? Hardly anyone. They trash them and get new ones.
Re: (Score:2)
It also has return rates for every computer component; very interesting info.
For the first time, SSDs are also covered in this type of article. The failure rates recorded by manufacturer:
- Intel 0.59%
- Corsair 2.17%
- Crucial 2.25%
- Kingston 2.39%
- OCZ 2.93%
To be counted, the return had to be made directly through the merchant, which is not always the case since it is possible
Bad news (Score:2)
Bad news, especially for the enterprise users:
- HGST drive quality is above the rest of the industry; that may easily change after the acquisition.
- HGST is often willing to invest in relatively niche products (a recent example is 3TB 7.2k drives with a SAS interface; no one else makes them). WDC will probably kill any product line that doesn't sell in really huge quantities.
Re: (Score:2)
Making SSDs inhouse. (Score:3)
While SSDs and HDDs serve the same function the technologies are pretty different, so it's much easier for Intel and various RAM manufacturers to start making SSDs than it is for WD to transition to them.
Last I checked WD's SSDs were just a rebranded product made by some other company.
I guess Hitachi Global Storage Technologies has all it needs to manufacture SSDs in house, and I'm assuming the other HDD companies will have to make some acquisitions of their own to stay competitive.
Am I the only one who thought... (Score:2)
Hard Disk Sector Defrags ?
Rescue data from SSD (Score:2)
Yes they will fail, but the failure modes will be different. No heads to crash into the spinning surface damaging the oxide layer and spreading bits of junk across the rest of the surface of the drive.
People will still fail to take adequate backups - so there will still be a market for data recovery from failed SSDs - I wonder what it will look like. Pulling the raw flash chips out of a failed SSD will most likely allow an enormous number of bits to be
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Not even that. If you look at the "facts" section you can see it's a 60W bulb that's currently working at 4W. Which would explain why it's lasting so long and why it looks more like a space heater than a light bulb.
Re: (Score:2)
Re: (Score:2)
Yeah, but it barely glows.
To make a light bulb that actually produces a useful amount of light you have to get the filament white hot. And when you do that it starts evaporating, which is why it eventually burns out.
If you tone things down until it glows a dull red and barely produces more light than a candle, then yeah, it'll last forever. But it won't be very useful.
Re: (Score:2)
Of course, that doesn't change your pri
Re: (Score:2)
Ideally, as a HDD maker, you want your hard disks to work indefinitely, and have people buy newer models based on capacity, speed, features, or a combination of the above.
Even without factoring in drive failures, HDDs leave circulation for good for another reason -- data confidentiality. When selling machines, any company that has an interest in security is going to be yanking HDDs from all boxes going out the door and melting/shredding/smashing them to ensure that no data present on those drives ever is r
Re: (Score:2)
Most consumers don't give a crap about who made the disk sitting in their box on their desk, just that it has enough space for justin bieber mp3s and pictures of their children.
Besides, off the cuff, I'm willing to bet that given the reliability of a given disk, the reason consumers find themselves with new disks largely isn't replacement after failure; it's most likely a new device purchase. So it really is in the best interest of the drive makers to make sure the drives are durable bef
Wear leveling algorithms and proprietary firmware (Score:3)
Re:Not saying anything new (Score:4, Informative)
You can make a light bulb that never dies, "never" as measured against the lifespan of a human being. These can even be made with a filament - just run it at lower power so it glows dim red instead of bright white. A 10^-9 torr vacuum would also help.
But if you want a real white-light "light bulb", you can make one from plasma in a sealed container excited by an external electromagnetic field. The bulb itself is just gas in a hermetically sealed glass container. There is nothing to burn out. The lifespan of the device is the lifespan of the external electrical components, and that can be decades.
Or an LED light bulb. Lasts "forever" if properly designed.
But no one wants to buy a $500 light bulb. People would rather spend $1 every year and replace any broken ones.
Re: (Score:2)
But no one wants to buy a $500 light bulb. People would rather spend $1 every year and replace any broken ones.
I recently replaced a 300W halogen bulb that had basically been running continuously for 13 years. It would get turned off a few times a year due to power outages or for cleaning the lamp, but otherwise ran 24/7.
I have three halogen lamps in my house, and have used a total of 6 bulbs in them over the course of 13 years. Although the other two lamps are not run 24/7, they are used daily. At about $3/bulb, that's a pretty good cost per hour of use, and works out to around 20,000 hours life per bulb.
Re: (Score:2)
How about one that is approaching 110 years and counting?
http://www.centennialbulb.org/ [centennialbulb.org]
Re: (Score:3)
Do keep in mind, though, that those year figures are estimates based on the effective number of hours of runtime before they expect the bulb's phosphor to stop working as well, or the starter element to fail in fluorescents. It's not a continuous lifespan; it's based on an assumption that most bulbs will be on for 4-6 hours per day, tops.
At 10k hours, that equates to a little over a year of continuous duty (416 days...). If you presume 4 hours per day of use, like they tend to, that's an estimated 7
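The conversion is just arithmetic on the rated hours:

    # Re-doing the rated-hours arithmetic from the comment above.
    rated_hours = 10_000
    print(f"{rated_hours / 24:.1f} days of continuous duty")        # ~416.7 days
    print(f"{rated_hours / 4 / 365:.1f} years at 4 hours per day")  # ~6.8 years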
Re: (Score:2)
2. Create a perfect seal.
3. Apply a low amount of power rather than the 70W to 100W standard.
4. Profit.
Many of the advancements in making bulbs brighter than a candle also contributed to their shorter lifespans. Today's highly fragile tungsten filaments, for instance, are designed for maximum radiation, not maximum life.
Re: (Score:2)
Re: (Score:2)
Raid is not a backup, FYI.
I'm having a tough time locating in the GP where he portrayed RAID as a backup solution?
Re:HDDs not going away (Score:4, Interesting)
Raid is not a backup, FYI.
I'm having a tough time locating in the GP where he portrayed RAID as a backup solution?
Duh, If you do backups you don't need Raid
*Ducks*
Re: (Score:2)
*head asplodes*
Re: (Score:2)
Re:HDDs not going away (Score:5, Insightful)
Raid is not a backup, FYI.
Parent said he was using RAID to mitigate failure, not to provide backup. One might use a RAID setup as part of a data backup system, but this was not described by the parent post.
And if you bought them all at the same time from the same place, chances are, when one finally dies from old age, more than one may perish simultaneously.
The likelihood of two devices failing at the same time due to old age is incredibly small, unless by "old age" you mean something like "a meteorite striking the storage system".
Re: (Score:3)
Re: (Score:2)
And if you bought them all at the same time from the same place, chances are, when one finally dies from old age, more than one may perish simultaneously.
The likelihood of two devices failing at the same time due to old age is incredibly small, unless by "old age" you mean something like "a meteorite striking the storage system".
But the likelihood of two devices failing at the same time a few months after installation due to a manufacturing defect is high enough that it warrants minor caution.
Re: (Score:2)
The likelihood of two devices failing at the same time due to old age is incredibly small, unless by "old age" you mean something like "a meteorite striking the storage system".
The MTTF within the same batch of drives is usually about the same, so there's some truth in the parent's post. It's not sudden sympathy failure that kills them, though; it's the thrashing they get when the array is rebuilt that can push a borderline drive over the edge. Google RAID6 and find the papers which outline the reasons behind RAID6's development. There are some great statistics which show these problems.
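A crude way to see why the rebuild window matters: if each surviving drive independently has some probability p of dying before the rebuild finishes, the chance that at least one does grows quickly with array size. The p used below is a made-up placeholder, not from any paper, and same-batch drives are correlated, so the real risk is worse than this independent model suggests.

    # Chance that at least one surviving drive fails during a RAID rebuild,
    # assuming independent failures with an assumed per-drive probability p.
    def p_second_failure(surviving_drives, p_per_drive):
        return 1 - (1 - p_per_drive) ** surviving_drives

    for n in (3, 7, 11):   # survivors in 4-, 8-, and 12-drive arrays
        print(f"{n} survivors: {p_second_failure(n, 0.02):.1%}")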
Re: (Score:2)
I'd be awfully cautious of that. I had two drives mirrored in a RAID1 and thought i was safe. Then, my power supply exploded and fried both drives at the same time (several chips on the circuit boards were literally smoking).
It's always a good idea to keep external backups of some kind that are not connected to the PC. Even an external drive can be damaged.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
SSDs wear out gracefully so that you can still read your data after many failures. Spinning drives just die and you go to a backup.
Most spinning disks give signs that they are about to give up the ghost. My understanding is that SSDs work perfectly fine one moment and the next they are completely unresponsive.
You're both right. It all depends on how each drive fails. No matter what, any drive can have catastrophic failure without warning.
Re: (Score:2, Informative)
Sold to Toshiba, Oct 2009.
http://www.fujitsu.com/us/services/computing/storage/hdd/
Re: (Score:2)
Who the fuck wrote this anyway?
solid-state drives promise greater power efficiency, performance, and resistance to physical shock; and run more quietly
solid-state drives promise greater power efficiency, performance, and resistance to physical shock, while providing more quiet operation
solid-state drives promise to provide more quiet operation while delivering greater power efficiency, performance, and resistance to physical shock
Re: (Score:2)
That just depends whether you care about cost/size, or cost/IOPS. SSDs are reasonably priced by the latter measure.