How Intel and Micron May Finally Kill the Hard Disk Drive
itwbennett writes: For too long, it looked like SSD capacity would always lag well behind hard disk drives, which were pushing into 6TB and 8TB territory while SSDs were primarily 256GB to 512GB. That seems to be ending. In September, Samsung announced a 3.2TB SSD. And during an investor webcast last week, Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Meanwhile, hard drive technology has hit a wall in many ways. They can't really spin the drives faster than 7,200 RPM without increasing heat and the rate of failure. All hard drives have left is the capacity argument; the speed argument is gone. Oh, and price. We'll have to wait and see on that.
Price not yet announced (Score:2)
Re: (Score:2)
Re: (Score:2)
Cons:
- Samsung recommends turning off indexing for reliability. Doing so means that you can no longer search for files from the "Search programs or files"
Eh?
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
The price per GB on SSDs has been below $0.50 for some time now.
Re: (Score:2)
That would put this thing at $1,600, vs an HDD with twice the capacity at $300.
Until they get below $0.1/GB, HDDs will have a very, very safe hold on the "capacity" side of the market.
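As a rough check on those numbers, a quick sketch; the capacity and $/GB figures are just the ones quoted in this thread, not vendor pricing:

```python
# Rough price check using the figures quoted in this thread (assumptions, not vendor pricing).
ssd_capacity_gb = 3200      # the announced 3.2TB SSD
ssd_price_per_gb = 0.50     # "below $0.50" per GB, per the comment above
hdd_price, hdd_capacity_gb = 300, 6000   # a ~6TB HDD at the street price cited later in the thread

print(f"3.2TB SSD at $0.50/GB: ${ssd_capacity_gb * ssd_price_per_gb:,.0f}")   # ~$1,600
print(f"6TB HDD: ${hdd_price} (~${hdd_price / hdd_capacity_gb:.2f}/GB)")      # ~$0.05/GB
print(f"3.2TB SSD at $0.10/GB: ${ssd_capacity_gb * 0.10:,.0f}")               # ~$320
```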
Empty article.. (Score:5, Informative)
I don't know why Intel and Micron get any special consideration, given that the summary itself notes that Samsung has already announced the same move.
Also an incorrect assertion that drives don't go faster than 7,200 RPM (there are 15k drives; they're just pointless for most users now that SSD caching strategies are available).
Re: (Score:2)
From the summary, "They can't really spin the drives faster than 7,200 RPM without increasing heat and the rate of failure. "
I don't see an assertion in the summary, or the article, that drives are physically limited to 7200 rpm. You couldn't finish the sentence before replying?
Re: (Score:3)
Re: (Score:2)
You fail at reading comprehension. There is nothing in the summary that says those drives aren't possible, just that they have increased heat and (therefore) an increased rate of failure. This is one reason why average hard drive speeds haven't improved much in the last 15 years.
I once bought six Western Digital 10000 RPM drives for a RAID setup. Three of them failed within the first year. Two failed the next year (including one of the warranty replacements). I replaced them all with six of their 7200
Re: (Score:3)
How loud are the fans in those servers? Too loud to put into a home computer, right? Well, the 15K drives are part of what the obnoxious fans in a typical server case are cooling. Even in a server environment where you can handle the noise of ventilating that heat, people can still worry about the total heat production of a server rack. It doesn't help that the 15K drives are normally smaller too, physically and in capacity, which means you need more of them to reach the same total storage.
Re: (Score:2)
Also an incorrect assertion that drives don't go faster than 7,200 RPM (there are 15k drives; they're just pointless for most users now that SSD caching strategies are available).
That isn't what was asserted.
They asserted there's no REAL market for 10K/15K hard drives, as the performance increase isn't helpful, the cost to manufacture and test skyrockets, and the additional physical and thermal stresses shorten the drive's lifespan and make them unsuitable for some applications (laptops).
Re: (Score:3)
Re: (Score:2)
And the point is that SSDs fill that market need so much better (the fast disks aren't much cheaper, if at all, and they suffer decreased reliability) that there's no point to them.
Intel & Micron (Score:3)
Also, didn't Intel exit the flash market a while back, spinning off its flash division along with ST Micro to Numonyx, which later got acquired by Micron? I thought that the whole idea then was that memory was so unprofitable that it wasn't worth keeping it as an albatross on corporate margins.
Also, memory fabs are different from the ones used for making processors/controllers - it's not like fabs that don't make more Atoms or Celerons will be repurposed for SSDs. So how does it make sense for Intel to
Re:Intel & Micron (Score:4, Informative)
The Numonyx venture was specifically for NOR flash manufacturing.
IMFT (Intel Micron Flash Technologies) is the NAND partnership between Intel and Micron.
Re: (Score:3)
Also an incorrect assertion that drives don't go faster than 7,200 RPM
Also, the premise from the SSD article that spindle speed is a limiting factor is a bogus oversimplification.
Density increases have always translated to correspondingly higher I/O rates at same rotational speeds.
Re:Empty article.. (Score:4, Interesting)
With enterprise SSD prices hitting $1/GB (granted, some are still $2-3/GB), the days of 15k RPM drives are definitely numbered. You get 50-100x the IOPS out of SSDs compared to 15k RPM SAS drives. That means for a given level of IOPS that you need, you can use a lot fewer drives by switching to SSDs.
I'd argue that if you are short-stroking your 15k SAS drives to get increased IOPS out of the array, it's past time to switch to enterprise SSDs.
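Roughly, and treating the figures above as ballpark assumptions rather than vendor specs, the drive-count math looks like this:

```python
# Back-of-the-envelope drive-count comparison for a given IOPS target.
import math

target_iops = 50_000           # hypothetical workload requirement
sas_15k_iops = 175             # ~150-200 IOPS per 15k RPM SAS drive (ballpark)
ssd_iops = sas_15k_iops * 75   # "50-100x" per the comment above; take ~75x

print(f"15k SAS drives needed: {math.ceil(target_iops / sas_15k_iops)}")   # ~286 spindles
print(f"SSDs needed:           {math.ceil(target_iops / ssd_iops)}")       # ~4 drives
```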
About that Intel 3D NAND... (Score:2)
According to Techreport, Intel's three-dimensional NAND [techreport.com] will enable 10TB flash drives in servers in two years
Spinning storage is king... (Score:2)
...as long as high capacity SSDs keep costing as much as an entire computer.
Re: (Score:2)
...as long as high capacity SSDs keep costing as much as an entire computer.
Hard Drive [newegg.com]: $429
Whole Computer [newegg.com]: $400 or less.
Re:Spinning storage is king... (Score:4, Funny)
Hard Drive [newegg.com]: $429
Whole Computer [newegg.com]: $400 or less.
Ah, yes. Confucius say, the path to mastering pedanticism is paved with low UIDs.
Re: (Score:2)
For $150 you can get an SSD with plenty of space for the vast majority of desktop roles, and it will beat an HDD by a factor of 10 or more on speed, heat, power consumption, and noise.
The only thing HDDs are king of is being slow. King, maybe, of certain roles that require a very large amount of cheap space.
And if you want to argue capacity needs for servers, you need to start talking about enterprise HDDs, and their $/gb is not as high as consumer HDDs, and it gets worse when you factor in power consumpti
Wait? For how long? (Score:3, Insightful)
We'll have to wait and see on that.
What's wrong with you people? We have already been waiting for MORE THAN FIVE fucking years. Still hasn't happened.
1TB HDD - 60-80€, 1TB SSD - >350€.
The problem is that once the PC is turned on, there is not much use for the SSD speed. It's not like I'm moving terabytes of data around every day. And even if I have to, I do not have to wait for it: I simply leave it running overnight.
Another problem is that (some) SSDs have the nasty habit, once failed, of denying you access to the data at all. I hoped that at least those jackasses would straighten out the SMART support and finally standardize the monitoring parameters. But a few moronic manufacturers have even proclaimed that their drives are so good that they don't need no stinking SMART support...
All in all, SSDs are developing too fast. And they have a pretty bad history of firmware bugs. And literally all manufacturers, instead of strengthening their stance on data safety, have as one doubled down on the "oh, but look how fast it is!"
P.S. And TRIM support is still in shambles. After all these years, some drives still require a proprietary application/driver to be installed.
Re:Wait? For how long? (Score:5, Informative)
The problem is that once the PC is turned on, there is not much use for the SSD speed.
Ever tried loading the next level in a game? SSDs make a big difference.
And, you've completely forgotten all the other uses (both enterprise and personal) like database, video editing, running VMs, etc.
Hybrids are where it's at (for me) (Score:3)
I've been using Seagate's hybrids for a couple of years and the combination of performance, simplicity, and economy hits the spot. I have 750GB and 1TB drives in my laptops and a 2TB in my gaming rig. The hybrid drives were a small price bump for a big performance bump. Sure, gigantic SSDs would give me a slight performance boost, but it's a big jump in price for a small jump in performance over a hybrid.
Re: (Score:3)
Yes, 1TB hybrids are the sweet spot right now.
Capacity and price (Score:2)
Two out of three ain't bad.
For all the reliability worriers (Score:3)
600TB total writes - http://techreport.com/review/2... [techreport.com]
800TB total writes, and some of these consumer grade drives start to fail - http://techreport.com/review/2... [techreport.com]
"By far the most telling takeaway thus far is the fact that all the drives have endured 600TB of writes without dying. That's an awful lot of data—well over 300GB per day for five years—and far more than typical PC users are ever likely to write to their drives. Even the most demanding power users would have a hard time pushing the endurance limits of these SSDs."
By contrast, my main home machine (120GB Kingston SSD) has seen ~7GB of total writes in over 2 years of 24/7 use. I'll leave you to do the math on lifespan for that.
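For anyone who'd rather not do that math by hand, a quick sketch, reading the parent's figure as total writes and taking the 600TB endurance result above at face value:

```python
# Crude lifespan estimate: endurance divided by the observed write rate.
endurance_gb = 600 * 1000      # 600TB survived in the Tech Report torture test
written_gb = 7                 # parent's figure, taken at face value
elapsed_days = 2 * 365         # "over 2 years" of 24/7 use

gb_per_day = written_gb / elapsed_days
years = endurance_gb / gb_per_day / 365
print(f"~{gb_per_day:.3f} GB/day -> ~{years:,.0f} years to hit 600TB of writes")   # astronomically long
```

The point being that at desktop write rates, flash wear is about the last thing likely to kill the drive.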
Re:What about long-term data integrity? (Score:5, Informative)
Well, the Samsung 3.2 TB drive claims that you can read/write the entire drive every day for five years before failure. It's my understanding that at one point, SSDs were notorious for gradually declining over time, but that today's generation of SSDs basically has reliability out the wazoo. I can't quote you stats on it, but anecdotally, I've had a couple of SSDs in my computer for several years now, I leave it on 24x7, and I've never had a problem.
...Yet. YMMV.
Re:What about long-term data integrity? (Score:5, Insightful)
Re:What about long-term data integrity? (Score:5, Informative)
The problem is how flash itself works, and how smart your controller is. Unlike a disk, flash must be erased before writing. And here is where the problem arises: flash data is stored in a page of cells, with typically 8 pages of data per "block". Erasing happens on the block level. So in order to erase a single page of data, you need to erase all 8 pages in a block. Since you need to keep the data of the other 7, you first need to copy that data into another block, erase the original one, write all data back and erase your "tmp" block. The churn on blocks happens a lot faster than what you'd think.
That said, for consumer products, MLC or TLC is perfectly fine. For enterprise, not so much.
You'll see that in the price, obviously. TLC is the cheapest, followed by MLC, and the most expensive technology is SLC.
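To put a number on the single-page-rewrite scenario described above (using this comment's 8-pages-per-block figure; real NAND blocks hold more pages, and, as the replies below point out, real controllers avoid this naive read-erase-rewrite):

```python
# Write amplification for the naive read-erase-rewrite of a single page, as described above.
pages_per_block = 8     # figure used in the parent comment; real blocks are larger
pages_changed = 1       # the host rewrites one page

pages_physically_written = pages_per_block   # 7 untouched pages copied + 1 new page
print(f"write amplification: {pages_physically_written / pages_changed:.0f}x")   # 8x for a 1-page update
```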
Re:What about long-term data integrity? (Score:5, Informative)
That's actually what they do.
1) Select an empty block.
2) Copy the data into RAM on the device.
3) Write the new physical block.
4) Update the virtual/physical block map.
5) Mark the old block as empty.
Re: (Score:2)
If that's the case, then why are they not copying the data to ram contained on the drive itself? Seems like an awful waste of cycles with a relatively simple fix. Is it just a cost issue?
Cost and reliability/latency. If you copy it to RAM and get a power outage, data is gone. So that will ruin your reliability. Which means that you have no choice but to ack the write after it's written to the actual block itself. Which in turn increases the latency between receiving the data and ack'ing it.
Re: (Score:3)
Re:What about long-term data integrity? (Score:5, Informative)
you first need to copy that data into another block, erase the original one, write all data back and erase your "tmp" block. The churn on blocks happens a lot faster than what you'd think.
If that's the case, then why are they not copying the data to ram contained on the drive itself? Seems like an awful waste of cycles with a relatively simple fix. Is it just a cost issue?
Any wear levelling worth its salt will not do what the grandparent wrote. You simply do not change one page in a block. If you write a single page, that is handled by mapping that page to another (free) block and maintaining a mapping table for which LBAs are currently stored in what blocks. However, if you are doing single-sector writes, or in turn repeated I/O flushes of the same sector, you still see a lot of write amplification. To keep data integrity, the mapping tables also need to be kept updated in a correct way (or at least uniquely recoverable by scanning through all blocks after a hard power off).
Re: (Score:3)
Any wear levelling worth its salt will not do what the grandparent wrote.
Yes, which is where the smarter controllers come in, and where you have the process of "garbage collection". There was a piece a while ago on TRIM not being supported on some Apple gear, if I'm not mistaken.
Re:What about long-term data integrity? (Score:4, Interesting)
More accurately, recent versions of OS X have their use of TRIM commands limited to the 'Apple endorsed' models of SSD, the ones the machine ships with. There's some dispute over the reasons for this. One faction claims it's Apple trying to sabotage upgrades, making it so that if you buy an after-market SSD rather than paying their insane markup, performance will become awful. Another faction claims it isn't deliberate sabotage, but rather a lack of interest in testing for unsupported hardware configurations: TRIM can potentially malfunction horribly if the SSD doesn't implement it in quite the expected way, and Apple has only coded and tested it for their preferred models. By disabling it on third-party hardware they remove the need to test fifty-odd different devices to make sure it isn't going to corrupt data.
Re: (Score:3)
Yes and no - they simply don't QA every drive that ever existed or will exist, because they didn't ship them and it would be ridiculous to do that anyway. What changed is that they implemented code signing on kernel extensions in order to beef up security a bit, and the side effect is that the ugly binary patch people were applying to the AHCI kext is quite broken. If anyone out there was patching any of the other 200+ kexts that ship with OS X, they have a similar problem; unless they turn off
Re:What about long-term data integrity? (Score:5, Informative)
The actual wear leveling algorithms are proprietary, but rest assured that they do not use flash as temporary memory, and neither do they read an erase block, change one sector and write the erase block back. One thing flash controllers do is maintain a list of unused sectors. So, if you write to one sector, the data goes into an empty sector of a different erase block and the controller remembers that the sector's old location is now unused (and where the sector is now). That's where the TRIM command helps: it marks sectors as unused without using up a different sector somewhere else.

When the drive needs more free erase blocks, it copies the remaining data from mostly "abandoned" erase blocks and flashes (erases) the old erase block. All that and more brings down the write amplification, which measures the average number of sectors actually written for each write to a sector. Intel claims a write amplification of just 1.1 for one of its controllers.

Also, wear leveling makes sure that erase blocks are used evenly. Otherwise, writing the same few sectors over and over again would burn out the drive in seconds. All in all, you can expect to write at least a few hundred times the capacity of the drive, in any order and to any sectors you want, before you need to worry about flash cell wear.
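A toy model of that scheme: page-level remapping, a stale-page list, TRIM, and copy-then-erase garbage collection. The real firmware algorithms are proprietary, as noted above, so every name and number here is invented purely for illustration:

```python
# Toy flash translation layer: writes go to free pages instead of rewriting in place,
# TRIM marks pages stale without consuming new ones, and garbage collection reclaims
# mostly-abandoned erase blocks. Purely illustrative; real firmware is far smarter,
# and real drives reserve over-provisioned space so they never fill up with live data.
class ToyFTL:
    def __init__(self, blocks=8, pages_per_block=4):
        self.pages_per_block = pages_per_block
        self.blocks = [[None] * pages_per_block for _ in range(blocks)]  # None = erased
        self.map = {}            # logical sector -> (block, page) of its current copy
        self.host_writes = 0     # sectors the host asked to write
        self.flash_writes = 0    # pages actually programmed (includes GC relocations)

    def _free_page(self):
        for b, block in enumerate(self.blocks):
            for p, cell in enumerate(block):
                if cell is None:
                    return b, p
        self._garbage_collect()
        return self._free_page()

    def write(self, sector, data):
        self.host_writes += 1
        b, p = self._free_page()          # may trigger GC, which can move existing data
        old = self.map.get(sector)        # look up the current copy *after* any GC
        self.blocks[b][p] = (sector, data)
        self.flash_writes += 1
        self.map[sector] = (b, p)
        if old is not None:               # the previous copy is now stale
            self.blocks[old[0]][old[1]] = "stale"

    def trim(self, sector):
        # Mark the sector's page stale without using up a page somewhere else.
        if sector in self.map:
            b, p = self.map.pop(sector)
            self.blocks[b][p] = "stale"

    def _garbage_collect(self):
        # Erase the block with the most stale pages, relocating any live data first.
        victim = max(range(len(self.blocks)),
                     key=lambda b: sum(c == "stale" for c in self.blocks[b]))
        live = [c for c in self.blocks[victim] if c not in (None, "stale")]
        self.blocks[victim] = [None] * self.pages_per_block
        for sector, data in live:         # relocations are the write amplification
            b, p = self._free_page()
            self.blocks[b][p] = (sector, data)
            self.flash_writes += 1
            self.map[sector] = (b, p)


ftl = ToyFTL()
for i in range(200):
    ftl.write(i % 20, f"rev{i}")          # 20 hot logical sectors on a 32-page toy drive
print(f"write amplification: {ftl.flash_writes / ftl.host_writes:.2f}")   # modestly above 1.0
```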
Re: (Score:3)
You might as well ask the same question about a hard drive. If you power down a hard drive and put it on a shelf for a year, there is a better than even chance that it will be dead when you try to power it up again, and an even higher chance that it will die within a few days.
A powered-down SSD that has been written once should be able to retain data for ~10 years or so. Longer if kept in a cool place. As wear builds up, the retention time drops. You can look up the flash chip specs to get a more precis
Re: (Score:3)
Thunderbolt is external PCIe, and is thus nice for overpriced laptops and trash-can-shaped workstations. Otherwise it is a lot cheaper to use your internal PCIe. PCIe 2.0 4x will do the same job as Thunderbolt 2.0, so you can either use a PCIe 2.0 4x card for your SSD (or a 3.0 one to double the bandwidth again; it seems the 3.2TB Samsung SSD uses that) or for a 10Gb NIC.
The real issue with 10Gb Ethernet is the cost and then the power use; not anywhere near a graphics card, but around the 10W mark, which is significant
Re:What about long-term data integrity? (Score:5, Interesting)
Well, the Samsung 3.2 TB drive claims that you can read/write the entire drive every day for five years before failure.
Such statistics are meaningless in my book. Light bulb manufacturers claim their bulbs will last five years or seven years, but when you look at the fine print, that's assuming you turn the light on, leave it running for 3 hours, and turn it off, once per day -- nobody uses light bulbs like that.
Re: (Score:3)
If you actually use a Kindle, you will realize that the 4-week claim is quite true. I do not know the fine print, etc., but I have been using one for several years and it still gives me a nice 3-4 weeks of battery. I read about 2-3 hours daily.
Re: (Score:2, Informative)
Yes, but they employ a lot of techniques to mitigate this. The endurance is so high that unless you are recording audio/video almost constantly, it will usually not become an issue. There's plenty of literature on it, so I'm not going to be redundant.
Re: (Score:3, Insightful)
RAID doesn't protect against loss of data, that's what backup is for. RAID protects against loss of uptime.
Re: (Score:3)
All RAID levels protect against loss of data due to failure of individual drive(s), port(s), or data cable(s).
RAID 0 is not RAID.
RAID is not backup.
Re:What about long-term data integrity? (Score:4, Insightful)
You're both right. RAID can decrease the chances of data loss due to some kinds of problems, but ultimately it shouldn't be considered a reliable protection against data loss. A RAID can be lost or corrupted, or someone can overwrite or delete a file. If you want to assess the risk to your data and talk about the set of data that is protected against loss, you should only consider your backed-up data to be "protected". The protection that RAID offers is too weak to be considered significant protection.
Therefore, the fundamental purpose of a RAID is to prevent the downtime due to failure of an individual hard drive. Without RAID, your data volume would stop running, and you'd be offline while you repair the device and restore from backups; that's what you're successfully preventing. All the data that has been backed up (assuming your backup is good) should be safe, and any data that has not been backed up is not safe, regardless of whether you have a RAID.
RAID is redundancy, not backup.
Re: (Score:3)
Re:What about long-term data integrity? (Score:5, Funny)
Risky Array of Imminent Disaster.
Re: (Score:3, Funny)
It doesn't matter what RAID level you use, rm -rf / will still dutifully delete all of your data for you.
Repeat after me, RAID is not a backup.
Re: (Score:2, Insightful)
Backups aren't the discussion here, data loss due to drive failure is.
Context means everything, and you can't use boring thoughtless mantras to answer every question.
Re:What about long-term data integrity? (Score:5, Informative)
No, it doesn't. It doesn't protect you against losing data in a fire, it doesn't protect you against losing data to malware, and it doesn't protect you against losing data to making a mistake. All changes are automatically propagated across all disks. Backup protects you against losing data.
What RAID 15 does is protect you against losing a day of work because one disk failed - that is, it protects against loss of uptime.
Re: (Score:3)
In theory yes, in practice it's unlikely to ever come up. Wear leveling does wonders, over provisioning does more on top.
If SSDs had come first you'd be saying the same thing about HDDs: Don't HDDs have fragile mechanical parts that fail randomly?
Re: (Score:2)
It really depends what you are going to use it for. If it's your desktop PC, consumer grade drives are fine. If you are going to use the SSDs for scratch storage on a supercomputer or the journal devices for Ceph, you probably are going to want high write endurance drives.
Compression (Score:2)
Wear leveling does wonders, over provisioning does more on top.
Add compression on top of that. If your data isn't all ZIP, PNG, JPG, MPG, or some other compressed format, the controller turns repetition into even more over-provisioning.
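A quick way to see the difference the controller is exploiting (zlib here is just a stand-in for whatever proprietary scheme a SandForce-style controller actually uses):

```python
# Compressible vs. already-compressed data: only the former hands the controller
# "free" over-provisioning. zlib is a stand-in for the controller's real scheme.
import os
import zlib

log_like = b"INFO: request served from cache in 12ms\n" * 4096   # repetitive, text-like data
media_like = os.urandom(len(log_like))                            # random bytes ~ JPG/MPG payload

for name, data in (("repetitive text", log_like), ("already-compressed data", media_like)):
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: stored at ~{ratio:.0%} of original size")
```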
Re: (Score:2)
I believe what they do is spread the data all over the memory, to mitigate the issue of a small part of the memory being heavily bombarded with writes while 90% of it never gets touched. I'd imagine that copy-on-write filesystems, such as ZFS, would enable one to do it more effectively, since no actual data ever gets deleted; only the metadata is changed, and the changed data is written to another portion of the disk. If this is done effectively, then the disk utilization is increased, and endurance i
Re:What about long-term data integrity? (Score:5, Interesting)
Re: (Score:3)
Correct me if I'm wrong, but don't SSDs have a point where they've had too many writes per bit?
Tech Report [techreport.com] checked a bunch of SSDs for write durability and virtually all of them made it to 600 terabytes of data writes or better.
For an ordinary desktop user, write durability is not a problem. Now what about storage durability? With 3 bits per cell, how long before the data fades?
Re:What about long-term data integrity? (Score:4, Informative)
With 3 bits per cell, how long before the data fades?
This is the reliability issue that nobody wants to talk about. I am sure that many others are like me, with a closet full of old PCs. I like the idea that if I were to pull one out and power it on after it has sat unplugged for a span of years, it would still boot (CMOS battery/BIOS issues notwithstanding) and would still have all of the data I left it with.
SSDs on the other hand won't even guarantee that your data will still be there after *only one year* of being powered off, and as we've dipped below the 34nm process, sometimes SSDs are warranted for even less.
Re: (Score:2)
Typically, the endurance of any non-volatile memory (read: flash/hard drives) is measured per sector/block, where the block is the smallest unit of bytes/words/quad-words that an erase operation can erase. Typically, for flash, that number is 1-10 thousand cycles. That number erodes as one increases the number of bits per cell.
Like I mention below in a response to the GP, if you have it so that every byte is written only once and any overwrites happen to other bytes/sectors, you can avoid
Re: (Score:3)
That said, the rest of the storage array is spinning rust for cost reasons. When you are talkin
Re: (Score:2)
Re: (Score:2)
That's Adorable.
Re: (Score:3)
Actually, one of the nice things about SSDs is that as capacity increases, reliability increases too. More cells means more options for wear levelling, means more life span.
Re: (Score:2)
Not if you fill that space up. The excess capacity used for wear leveling only works if there is unreported space reserved for it, or you don't fill the drive all the way up. The minute you fill the drive, you are using the flash to its full extent. And putting things like page files on them will increase the wear rate significantly. A smart installation puts the page file/swap space on a magnetic disk and uses the SSD for everything else that isn't doing caching where there are heavy writes.
Until they can solve
Re:Reliability (Score:4, Informative)
Most manufacturers leave some number of gigabytes of flash unmappable by filesystems; that way you can never fill up the drive, even if you fill up the file system. Most pro/enterprise versions of the drives just leave a larger area unmapped.
Re: (Score:3)
Depends on the application. For a workstation or build box, we configure swap on the SSD.
The point is not that the build box needs to swap, not with 32GB or more of RAM, but that having swap in the mix allows you to make full use of your CPU resources, because you can scale the build up to the point where the 'peaks' of the build tend to eat just a tad more RAM than you have (and thus page), which is fine because the rest of the build winds up being able to better utilize the RAM and CPU that
Re: (Score:3)
And 15 years ago I bought a massive 2GB drive for $350.
Computer stuff gets cheaper over time. There's no reason the same won't be true for SSDs. At some point SSDs will be cheap enough that even if HDD are still 1/100th of the price, SSDs will still win because of all their other advantages.
Re: (Score:2)
And 15 years ago I bought a massive 2GB drive for $350.
Then you got ripped off. Seagate sold 28 GB HDDs for $350 in 2000.
Re:LOL (Score:4, Informative)
My first computer stored data on audio tape! I don't know what their capacity was, but I remember my father borrowing games from work to run through a dual-cassette deck. Some of them were copies of copies, and you had to fiddle with the treble knob to get them to read.
I don't think we're beating that unless someone here is old enough to have used core memory or fluid delay lines.
Re: (Score:3)
I don't know what their capacity was...
Well, a C90 tape had a 90-minute length and, depending on your computer, the data was written at 1200 baud (BBC Model B) to ~1500 baud for a ZX Spectrum. Unfortunately there was some overhead, so let's say this was 20% (guesstimate). This would give a tape capacity of 90x60x(1200/8)x0.8 = 648,000 bytes, or ~633 kB. Some people used to use C120s, which would get you an extra 33%, but those tapes were thinner and more likely to break or suffer degradation in sound quality, which meant you lost your program. With a Spect
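The same estimate in code, so the guesses (baud rate, 20% overhead) are easy to tweak:

```python
# Rough cassette capacity estimate from the figures above; the 20% overhead is a guess.
def tape_capacity_kb(minutes, baud, overhead=0.20):
    usable_bytes = minutes * 60 * (baud / 8) * (1 - overhead)
    return usable_bytes / 1024

for minutes in (90, 120):              # C90 vs. C120 cassettes
    for baud in (1200, 1500):          # BBC Model B vs. ZX Spectrum (roughly)
        print(f"C{minutes} at {baud} baud: ~{tape_capacity_kb(minutes, baud):.0f} kB")
# C90 at 1200 baud comes out to ~633 kB, matching the estimate above.
```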
Re: (Score:2)
And 15 years ago I bought a massive 2GB drive for $350.
4 years ago I bought a few 2TB drives for $69 each. Then the prices skyrocketed and still haven't come back down to that price.
Re: (Score:2)
Newegg has 2TB HDDs refurbished for $69. New ones at $79 or higher. Even the 3TB for $105 they sell is actually about the same $/GB as those $69 2TB HDDs.
Re: (Score:2)
Re: (Score:2)
To add to my previous post, going even farther back Seagate in 1998 sold a 6.4 GB HDD for $350. If you paid $350 for only 2 GB 15 years ago, you got royally boned.
Re: (Score:2)
And 15 years ago I bought a massive 2GB drive for $350.
Computer stuff gets cheaper over time. There's no reason the same won't be true for SSDs. At some point SSDs will be cheap enough that even if HDD are still 1/100th of the price, SSDs will still win because of all their other advantages.
I agree, eventually SSDs will become cheap enough that it won't be worth it to manufacture spinning hard-drives anymore. It's kinda like Plasma TVs today. They are being dropped by TV manufacturers because it's cheaper to scale up LED TVs.
That being said, it's not going to happen overnight. The drive manufacturers need to make their R&D money back, at the very least....
Re: (Score:2)
Companies still make tape drives. There is no reason to believe HDDs are going anywhere anytime soon. People have been wrongly proclaiming the death of the HDD for most of this decade.
Re: (Score:2)
To be fair, SSDs seem to be progressing much faster than HDDs. If HDDs continue their sluggish capacity growth, SSDs will pass them in cost per terabyte in a few years.
Re: (Score:2)
I agree, eventually SSDs will become cheap enough that it won't be worth it to manufacture spinning hard-drives anymore.
Capacity per dollar. Home use, 2TB is fine. But in business, arrays of 50TB are common, and size will only grow. Eventually spinning rust drives will become the near-line storage we used to have when tape and laser disks actually had large capacities.
Re: (Score:3)
Seems to me that I bought my first 85 MB HDD for about that much 30 or so years ago....
Sure, but speed... (Score:2)
6TB drives can be had for $250-300
That's really nice, I agree; I have a few 4TB drives that I use for photography... but I would without hesitation pay 4x the price for the speed of an SSD (especially one not bound by SATA speeds, like the Samsung PCIe SSD...)
Re: (Score:2)
Even the cheapest SSDs are 10x the price. No way these new ones are going to be 2.5x cheaper.
Re: (Score:2)
So you would pay $1200 for a hard drive "without hesitation"?
REALLY?
I find it hard to buy a mere 1TB SSD, and it's not quite as expensive as the thing you seem eager to treat as chump change.
Re:LOL (Score:4, Insightful)
SSDs will likely get there in 3-5 years by Moore's law. The question is where hard drives will be by then.
Re: (Score:3)
6TB for $300 is $50 per terabyte, while current SSD pricing is around $400 per terabyte. That's a factor of 8, not 16. I based my math on 18-month doubling, but that's for performance rather than density, so I was admittedly off. Still, three doublings at 24 months each should take you to roughly 6 years, not far off my original figures.
In terms of the applicability of Moore's Law to SSD pricing, prices for SSDs have been dropping far faster than Moore's law since the first practical SSDs hit the market. My first consumer SSD was
Re: (Score:2)
Okay, 3 years ago, a 256GB SSD cost $900.
Today, you can get them for $100-200.
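Taking $150 as the midpoint of that range, the implied halving time works out like this:

```python
# Implied price-halving time from the two price points quoted above.
import math

old_price, new_price, years = 900, 150, 3   # $900 three years ago, ~$150 today (midpoint of the range)
halvings = math.log2(old_price / new_price)
print(f"{halvings:.1f} halvings in {years} years -> one halving every "
      f"{years * 12 / halvings:.0f} months")   # roughly 14 months
```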
Re: (Score:2)
The writer is simply ignoring cost as an inconvenient fact of SSD adoption rates.
Re:LOL (Score:4, Informative)
You said: "The article writer must be smoking some amazing shit to come to such a wacky claim."
Are you referring to the article summary, or one of the specifically linked articles? Because the summary says: "Oh, and price. We'll have to wait and see on that."
So they are not making any claims about price. It seems maybe you are the one smoking too much?
Anyhow, there are only a few niche roles where a desktop needs that much space. Give me a 240GB SSD with 10x the IOPS, a tenth of the heat and power consumption, zero noise, and no moving parts. That's plenty.
HDDs still have their place for certain use cases, but SSDs beat them by an order of magnitude on just about every factor except price per gigabyte. $/GB is not as relevant when you realize $150 will get you enough space on an SSD for most desktop roles, and way more than you need on an HDD.
Re: (Score:2)
There is storage, and then there is storage. 6TB drives are nice and all, and good for things like pictures and movies, where you don't need access speed. The moment you have databases, which need things like IOPS, then size doesn't matter nearly as much. When you see the difference in IOPS between spindle drives and SSDs, you'll start to realize there is more to storage than size.
Your high-end 15K spindle drive can do about 150 IOPS. The only way to increase this is to go to RAID. A good high-end spindle RAI
Re: (Score:2)
Comparing a $10 USB stick with an SSD is like comparing turtles to cheetahs. Those USB sticks might write at 2-5 megs/sec. Maybe. 1/100 the speed of a good SSD. It's not a cromulent comparison.
Re:LOL (Score:5, Funny)
Comparing a $10 USB stick with an SSD is like comparing turtles to cheetahs. Those USB sticks might write at 2-5 megs/sec. Maybe. 1/100 the speed of a good SSD. It's not a cromulent comparison.
That may well be, but have you ever looked at a turtle's drag coefficient?
Re: (Score:2)
Usually USB flash drives have really poor write endurance and reliability in general compared to proper SSDs.
Re: (Score:2)
Sandforce controllers also do encryption, and certain controllers with certain operating systems can leverage this to integrate the controller-level encryption with the OS-level encryption, at which point the drive compression is done on the raw data before encryption happens.
Re: (Score:2)
It is more than just price per TB. It is speed (IOPS). What good is all the storage space in the world if it is slow? If it were just price per TB, tapes are even better in density per price, but the IOPS are so slow.
If you read the article on the Samsung SSD, you'll realize why they put it on a bus that wasn't SATA III (not fast enough).
Size, Speed, Price, pick any two.
Re: (Score:2)
Re: (Score:2)
You can only get 15K RPM drives in sizes up to 600GB. At about $280 for those disks ($0.47/GB), SSDs are a complete win.
You can get a very solid 1TB SSD for $450, which is cheaper per GB ($0.45) and much, much faster. You can get a serious enterprise 1TB SSD (10-year warranty) for $550. You may have racks of 15K rpm drives, but they are truly outdated dinosaurs: slow and plodding, and expensive to feed and care for.
Re: (Score:2)
Would you buy those 15k's new today? What usage pattern would favor 15k's over SSDs? Space is similar, if not in favor of the SSDs. On IOPS, SSDs win hands down. Price really depends on how much vendor gouging is going on, but if you need enterprise storage you tend to need IOPS, so far fewer SSDs can do the same job as a lot of 15k spindles.
Sure, enterprise bulk or near-line enterprise 7200s give you a ton of space.
Re:Really? (Score:4, Insightful)
It's not disingenuous at all. It merely demonstrates the primary problem here, namely the price gap. Large SSDs are still low capacity relative to HDDs, and expensive. They are priced outside the range of most consumers while also being inferior for bulk storage. A larger SSD is less able to justify its price premium than a larger HDD.
Even if SSD prices get less ridiculous, chances are that HDD prices/capacity will keep pace and continue to keep HDDs relevant.