SSD Prices On Parity With High-End HDD By 2011
kgagne writes "EMC executives were heavily pitching the virtues of solid state disk drives at their annual users conference in Las Vegas, saying that SSD will not only be on price parity with high-end Fibre Channel disk drives by the end of 2010 or early 2011, but that NAND memory will solve all sorts of read/write issues created by spinning disk technology. EMC's CEO and its storage platforms chief said the company will do everything it can to drive SSD prices down, and adoption up, by deploying them in their products. One issue might be that EMC is using SSD from STEC, which is being sued by Seagate for patent infringement." The article also mentions some of the work EMC has been doing to make sure SSD is enterprise-class reliable, such as developing "wear leveling" software.
SSD from STEC (Score:5, Insightful)
Why is this an issue? If EMC thinks the technology is a winner, and they don't have a stake in a particular player (of course they have to choose a supplier, but that hardly indicates a long-term commitment), then what do they care who wins?
One of the great things about being in EMC's shoes is that you want these things commoditized.
Either way, the sooner SSD is directly competitive, the better. They're ICs: you churn them out and only worry about yield. HDDs are mechanical and will always have their mechanical shortcomings.
--Q
Re: (Score:2, Insightful)
It would be good to have some competition anyway, to drive the prices down.
Still, I can't wait for 100 GB SSD drives. Finally, a laptop for gigging that can handle a beating.
Really, once these are standard in laptops, I think you will see more robust laptops on the market, since spinning disks have always been one of the quickest parts to fail (well, assuming the laptop has a decent cooling design).
Re: (Score:2)
Re: (Score:2)
Overlords (Score:1, Interesting)
My laptop and server already run off SSDs, and with any decent bit of wear leveling it is near impossible to wear out an SSD.
Re: (Score:1, Funny)
Re: (Score:1)
Re: (Score:2)
I'd be more worried about the controller chip. I've had a few USB thumbdrives go bad on me this way, rendering the data on them inaccessible unless you're really good at soldering. As a matter of fact, I've had more USB thumbdrives go bad on me in the past few years than hard drives, though admittedly I'm not carrying hard drives around in my pocket, so it isn't a fair comparison.
But when (Score:2)
Re:But when (Score:4, Insightful)
In a few years. Right now SSDs perform so incredibly in terms of IOPS (I/O Operations Per Second) that enterprise storage folks are eyeing them longingly. They just need a little more space for the money. Until then, it's very possible that we'll see SSDs used more as caching components in front of more antiquated spinning media.
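A minimal sketch of that caching arrangement, with flash acting as an LRU read cache in front of a slower disk (all names here are hypothetical, not any vendor's API):

```python
from collections import OrderedDict

class FlashReadCache:
    """Toy model: an SSD acting as an LRU read cache in front of an HDD."""
    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict: block -> data (the "HDD")
        self.capacity = capacity_blocks       # how many blocks fit on the "SSD"
        self.cache = OrderedDict()            # block -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)     # mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]            # slow path: spin the platters
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

hdd = {n: f"data-{n}" for n in range(100)}
ssd = FlashReadCache(hdd, capacity_blocks=10)
for n in [1, 2, 3, 1, 2, 3]:                  # a hot working set stays on flash
    ssd.read(n)
print(ssd.hits, ssd.misses)                   # → 3 3
```

The point of the sketch: a small, fast tier absorbs repeat reads of the hot working set, so the spinning media only sees cold misses.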
Re: (Score:2)
I mean, even in the old days they could choose between punch cards, tape, and teletype for data storage and retrieval.
Re: (Score:2)
Re: (Score:2)
C//
Re: (Score:2)
Re: (Score:2)
Will they be competitive with mid-range-priced hard drives? You can get 500 GB for $100 these days.
The other thing I am curious to know is when we are likely to get SSDs with similar read/write performance to current mechanical HDs.
Re: (Score:2)
Re: (Score:1)
The Future is Solid State (Score:3, Interesting)
However, SSD is the future wave, as it Just Works better than platter drives. A high-quality, high-density, low-priced SSD would knock the socks off any platter drive today if it were available. Platter drives will be the mainstream market for a while because of cost and size availability. However, as SSDs become cheaper and hold more space, they WILL push platter drives out.
Re: (Score:2, Interesting)
Longevity (Score:4, Insightful)
Platter drives are here to stay for a while. Once SSDs get the bugs worked out and the price drops to current platter drive levels, there will be a large migration.
Re: (Score:3, Insightful)
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:3, Interesting)
Seems less likely (Score:2)
If you undeleted a file on a system that was managing wear leveling behind the scenes, doesn't it seem more likely that area would be allowed to "cool down" for a while, since it just had a file in it?
With a normal filesystem it's more random.
However what
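The cool-down scheme described above can be modeled with a tiny allocator that always hands out the least-erased free block, so a just-erased block naturally sits out until the rest of the pool catches up (a sketch of the general idea, not any real controller's firmware):

```python
import heapq

class WearLeveler:
    """Toy wear-leveling allocator: always write to the free block with the
    fewest erase cycles, so a freshly freed ("hot") block cools down."""
    def __init__(self, num_blocks):
        # min-heap of (erase_count, block_id) covering the free blocks
        self.free = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.free)

    def allocate(self):
        count, block = heapq.heappop(self.free)   # coolest free block wins
        return block, count

    def release(self, block, count):
        # the block returns to the pool one erase cycle "hotter"
        heapq.heappush(self.free, (count + 1, block))

wl = WearLeveler(4)
blk, cnt = wl.allocate()
wl.release(blk, cnt)
# The released block now has erase_count 1, so the next allocation
# prefers one of the three still-untouched blocks instead.
nxt, _ = wl.allocate()
print(blk, nxt)   # → 0 1
```

Under this policy the block you just deleted a file from really is the last place new data lands, which matches the "cool down" intuition above.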
Re: (Score:2)
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1, Insightful)
The reality of the matter is that solid state is simply more reliable** than mechanical electronics in most cases.
*IBM Deskstar
** reliable is relative to the usage.
Re: (Score:1, Funny)
Re: (Score:1)
Re: (Score:2)
If you write any sort of research paper, you can't go and write, "and I'll leave it as an exercise to the reader to confirm all my supposed 'facts.'" There is an accepted procedure that the burden of proof for claiming anything lies with the one who claims it, at least in professional and academic circles.
Re: (Score:2)
C//
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
High end SAS drives claim 1,200,000 hours for MTBF.
Western Digital claims 600,000 hours for MTBF.
There have been studies from Google and Carnegie Mellon both that suggest that hard drive makers greatly exaggerate, and that drives fail as much as 15X more often than the manufacturers suggest.
SSD makers and spinning disk makers are not the same industry.
One cannot know whether or not the SSD makers "lie the same" as the disk makers.
C//
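For scale, the MTBF claims quoted above translate into annualized failure rates with simple arithmetic (a back-of-the-envelope sketch, assuming a constant failure rate):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760

def annualized_failure_rate(mtbf_hours):
    """Rough annualized failure rate (AFR) implied by a quoted MTBF,
    assuming a constant failure rate over the service life."""
    return HOURS_PER_YEAR / mtbf_hours

sas_afr = annualized_failure_rate(1_200_000)   # high-end SAS claim
wd_afr = annualized_failure_rate(600_000)      # Western Digital claim

print(f"SAS claimed AFR:  {sas_afr:.2%} per year")       # ~0.73%
print(f"WD claimed AFR:   {wd_afr:.2%} per year")        # ~1.46%
print(f"SAS at 15x field: {sas_afr * 15:.1%} per year")  # ~11%
```

So a 1,200,000-hour MTBF claims well under 1% of drives failing per year; if the field studies' 15x factor holds, that becomes roughly one drive in nine failing annually, which is a very different planning number.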
Re: (Score:3, Informative)
Sure you can; they're human, of course they lie. MTBF can be generated based on how long it takes for "more than 50% of the data sectors to fail," but who would keep using a disc that kept having randomly failing sectors, even if SMART technology can reduce the risk of losing data...
so of course SSD makers are going to calculate an MTBF assuming the type of wear of a typical person who boots Vista once for every time it crashe
Re: (Score:2)
Sure you can; they're human, of course they lie.
I have no doubt, but we cannot know if they lie the same. What I mean is that just because hard drives seem to fail 15X more often than manufacturers claim, according to some third-party study, doesn't mean that SSDs will fail 15X more often than the SSD makers claim.
These are different industries, and different circles of practice and so forth.
C//
Re: (Score:2)
Where the "statistical rate of partial failure" falls has nothing to do with how they measure their MTBF tests.
Re: (Score:2)
I personally have no idea how they are gerrymandering their MTBF, hence my comment. The advantage of this particular statement of mine is that it has a 100% chance of being correct. *wink*
C//
Re: (Score:1)
I personally have no idea how they are gerrymandering their MTBF, hence my comment. The advantage of this particular statement of mine is that it has a 100% chance of being correct. *wink*
C//
I was fairly sure they were abusing SMART's capabilities to keep a drive running after sectors began failing. If you don't count the drive as dead till it refuses to spin any longer, then you get a much better MTBF, even though nobody uses a disk with 75% of it as bad sectors.
Obviously only an insider would know Exactly how they do the MTBF numbers, but if Google calls a disk failed when one sector fails, and Seagate calls it a failure when, with 90% of the sectors failed, it stops rotating... then yo
Re: (Score:2)
Let me help you out, however. Exactly how many drives do you think Seagate tests for 1,200,000 hours before going to market with a drive series? Exactly zero.
The difference between Seagate's data and the Google and Carnegie Mellon data, I would say is that th
Re: (Score:1)
Math is my worst subject; I'm a lot better with history. See, in history people repeat the same mistakes over and over; in math they want you to get every question right... that's a bit harder...
I'm good with pattern recognition, weaker in other areas.
Re: (Score:2)
I've lost a Travelstar laptop drive in a similar way - another family member helpfully moved the laptop into direct sunlight during a bright Summer day, and left the laptop lid down. The poor hard disk drive overheated and fried out (gave a whining noise until the laptop powered down).
The only people who could really make any fair comparisons would be search engine/internet archive
Re: (Score:2)
Re: (Score:1)
Re: (Score:2, Informative)
Re: (Score:2)
How it plays out depends on how the hard disk manufacturers deal with it; personally, I'd prefer they play to their strength and simply forget about speed and concentrate on their forte: storing lots of bits.
I already have all the fast storage I need (if I wanted more I could stripe over more disks). Bulk storage, however, is something I'm permanently short of; I could easily use up to the petabyte rang
Re: (Score:3, Insightful)
Sure, currently buying a 1 TB solid state drive would be too expensive, but do we really need it?
No: on my HDD, I have two partitions: one of 30 GB for the OS and the software (which still has a lot of free space), and a big one for the data.
Replacing the OS-and-software partition with an SSD would bring 99% of the benefits of having a "full" SSD: fast boot time, fast application startup, etc. Especially as we can use part of the SSD as a cache for the HDD.
So IMHO, we don't r
Re: (Score:2)
What a crock.... (Score:2)
Also, SSDs, if they have a higher MTBF, will enable EMC to cut costs by having fewer CEs out there replacing drives.
What about filesystems... (Score:3, Interesting)
Or... does solid state storage take care of those oddities in firmware with the whole automatic write leveling technology?
Re: (Score:3, Interesting)
The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind
Re: (Score:1, Insightful)
The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind, so I doubt even that will matter to the software.
Actually, with proper software you'd probably want to do the opposite: try to wear out certain blocks as fast as possible. This way the lossage is more predictable and the rest of the disk is kept in better shape. The point being that bad sectors aren't really a big deal if you're prepared for them.
Re: (Score:3, Informative)
The only adaptation I can see is trying to minimize wear on certain blocks, but from the looks of it the SSDs are being designed with wear leveling in mind, so I doubt even that will matter to the software.
Actually, with proper software you'd probably want to do the opposite: try to wear out certain blocks as fast as possible. This way the lossage is more predictable and the rest of the disk is kept in better shape. The point being that bad sectors aren't really a big deal if you're prepared for them.
When NAND memory fails, it can fail in such a way as to make the ENTIRE flash memory device unreadable. This is from real-world NAND memory devices failing under real-world use. All of a sudden, not wear-leveling seems like a suicidal mode of wear, if the entire chip can short out from a single block failing.
"In case of a massive damage,
* If the device is not accessible at all (circuitry failure), no software can even attempt the recovery. Physical intervention is required.
Re:What about filesystems... (Score:5, Informative)
You should have said "IANAEE," for "I am not an electrical engineer," because of the way NAND memory works: it is entirely possible for a failure to be a complete and total failure of the device, or at least of 512 blocks of the device, if it doesn't create a short that prevents the whole device from working.
First, today's flash memory is NAND memory http://en.wikipedia.org/wiki/Flash_memory [wikipedia.org]
NAND memory is written with tunnel injection, which causes charge carriers to be injected into a conductor. http://en.wikipedia.org/wiki/Tunnel_injection [wikipedia.org]
Charge carriers: "In semiconductor physics, the traveling vacancies in the valence-band electron population (holes) are treated as charge carriers" http://en.wikipedia.org/wiki/Charge_carrier [wikipedia.org]
So we're using an electric charge to fill and create "electron holes" in a conductor. What could POSSIBLY go wrong, in the real world, rapidly changing whether a conductor has electron holes or not by forcing the electrons in or out of the material?
So trying to intentionally wear out a NAND memory chip can cause a severe problem whereby, instead of creating an electron hole, you've created a short circuit. "A short circuit (sometimes abbreviated to short or s/c) allows a current to flow along a different path from the one intended." http://en.wikipedia.org/wiki/Short_circuit [wikipedia.org]
Large Blocks (Score:2)
Re: (Score:2)
Re: (Score:1)
YAFFS [wikipedia.org]
JFFS2 [wikipedia.org]
LogFS [wikipedia.org]
Re: (Score:3, Interesting)
YAFFS and JFFS2 look to me like they might be showing their age.
From Wikipedia:
"YAFFS2 is similar in concept to YAFFS1, and shares much the same code... The main difference is that YAFFS2 needs to jump through significant hoops to meet the "write once" requirement of modern NAND flash.
YAFFS2 now supports "checkpointing" which bypasses normal mount scanning, allowing very fast mount times. Mileage will vary, but mount times of c. 3 seconds for 2 GB have been reported.
Measuring mount times in seconds per gigabyte is not encouraging for the design goals we're talking about here. The disadvantages section of the JFFS2 article pretty well speaks for itself, but note
"All nodes must still be scanned at mount time."
Overcoming that hurdle was how YAFFS2 even moved up to the seconds per gigabyte range - the introductory paper for LogFS says
"On the author's notebook, mounting an empty JFFS2 on a 1GiB USB stick takes around 15 minutes. That is a little slower than most users would expect a filesystem mount to happen."
The developer's g
Re: (Score:2)
Re: (Score:2)
It does involve having to think about how you write a phrase sometimes, but means that everyone has a consistent interface, knowing what
Re:What about filesystems... (Score:5, Funny)
Re: (Score:1)
No, no one has even considered that yet. I'll alert the academic world while you clue in the industry.
While you've got those chaps on the line, there are a few other topics I wanted to bring up that probably nobody has considered.
Re: (Score:1)
I'm sure several algorithms that affect filesystem performance were written with the former's characteristics implicitly in mind.
Maybe some new approaches become plausible, given this underlying change? Historically, being the first to understand and exploit the advantages of a new technology makes a huge difference.
Yeah, Right (Score:2)
Yeah, right, just what I buy for my home system right now. The really high-end expensive stuff.
For nearly all of us, this isn't news until SSD is competitive at the consumer disc drive level.
And competitive means price and projected lifetime. Watching my SSD start dying in pieces after only weeks, or months, isn't current hard drive reliability.
Re: (Score:1)
Re: (Score:2)
Re:Yeah, Right (Score:4, Interesting)
Well. These drives (FC, SCSI, SAS) are 10% of the market, very lucrative, and quite important for data center operations, server rooms, and so forth.
Projected lifetime for modern SSD drives is now getting to the point where they are more likely to be discarded due to technological obsolescence than they are to significantly deteriorate, BTW.
The projected intersection curve is further than six years out for SATA SSD price parity. That's an eternity in technological time, which is to say, there is no predicting it.
Price per unit of storage is by far not the only deciding factor, even in the consumer market. Flash can scale up performance much more quickly than spinning media. You can expect flash performance to more than double annually from here on out, I would say. You would of course be right to wonder how the SATA and SAS buses will keep up.
Look at FusionIO (http://www.fusionio.com) to see how flash will accelerate in performance. These devices have 160 internal channels in order to make the bytes flow at the rate they do. You can think of it as a sort of 160-wide RAID-0 striping mechanism.
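The 160-wide RAID-0-style striping described above boils down to a simple modulo mapping from logical blocks to channels (my own illustration of the concept; the real card's layout is surely more involved):

```python
NUM_CHANNELS = 160  # per the description of the FusionIO card above

def stripe(logical_block):
    """RAID-0-style mapping: consecutive logical blocks land on
    consecutive channels, so sequential I/O fans out across all 160."""
    channel = logical_block % NUM_CHANNELS
    offset = logical_block // NUM_CHANNELS
    return channel, offset

# Blocks 0..159 each hit a different channel; block 160 wraps to channel 0.
print(stripe(0))    # → (0, 0)
print(stripe(159))  # → (159, 0)
print(stripe(160))  # → (0, 1)
```

Because each channel serves only every 160th block, a large sequential transfer keeps all 160 channels busy at once, which is where the aggregate bandwidth comes from.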
$2,400 for one card is of course way out of consumer space. However, two points: 1) the cost of the flash in the system will drop to a fraction of its current price within two years, and 2) the ASICs on board this device will be "paid for" within the same period, allowing them to charge only a small fraction of their current price.
Expect other similar products to develop soon.
When FusionIO proves out the market for these devices--and mark my words, they will--competitors will follow in their footsteps, like bees drawn to honey.
C//
Re: (Score:2)
In fact, I doubt they will ever catch up in price, because HDDs will ALWAYS have to be cheaper to sell. There is no equatin
Where can you buy them? (Score:2)
I need four of 64GB or more. Price not important, but they must perform well and be reasonably reliable. SAS preferred.
Re: (Score:2)
Re: (Score:2)
http://www.fusionio.com/ [fusionio.com]
Re: (Score:2)
Re: (Score:2)
Seriously, I've been using the internet since 1994, and not once has "an implied trailing dot" been mentioned to me anywhere, except in the Wikipedia article (and apparently in the RFCs, since wiki cites them).
I don't read RFCs ever, and apparently DNS resolvers automatically add a trailing dot, but this was the first time I'd ever heard of needing one. And iron
Re: (Score:2)
In my defense, I never read RFCs either
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Have you even looked? I see at least three 64 GB ones, and one 128 GB. Price is their biggest disadvantage.
Re: (Score:2)
Re: (Score:2)
SSD Data Recovery (Score:1)
The article doesn't mention numbers in terms of power savings, but I'm looking forward to SSD-based RAID at the same power cost as a single Winchester HDD.
not only price, but density (Score:5, Insightful)
I think that density, not price, is going to drive the SSD market as well. We need space on our small computers, and the mechanical solution is not keeping up. I believe this is why Apple went to flash memory for the iPods, although initially they were dedicated to hard drives. My iPod mini only has 4 GB, the same as the nano that replaced it. The new nanos have more memory than even the EOL minis. The microdrive, though a good tech, was not scaling. The larger physical-size hard disks are now up to 160 GB, but that is small for modern times, in which many of us have a terabyte sitting on our home machine.
So I think we will pay SSD prices if they give us more space. The problem right now is that we pay more for an SSD and get less space. We pay $1,000 to Apple or practically anyone else for 64 GB of SSD. That is paying money for nothing. Wait until we can buy a MacBook Pro with a terabyte SSD for $4,000, or a MacBook Air with a 250 GB SSD for $2,000. Then we will see SSD laptops flying off the shelf.
Of course, for low-end machines many will stick with HDDs for many years, just as people entered the 21st century still storing things on floppies. Of course this will hasten the downfall of the HDD, as cheap, unreliable HDDs will take an even bigger share of the market than they have today, and, just like today, users will attribute the high failure rate to a problem with the technology, and not to the fact that they chose to buy a cheap hard drive. With the last major mechanical part gone, computers will become much more reliable, just like when stereos, for better or worse, left vacuum tubes behind.
I also hope that the DVD drive as a standard goes away soon, and applaud Apple for making the MacBook Air drive-free. The main reason for a DVD drive, other than installing software, is that we cannot rip our DVDs to a more convenient format. I would much rather carry around a couple of flash drives than a bag of DVDs. It would seem that in not too many years, shipping software on USB dongles will be just as cost effective. Already, 4 GB of flash costs less than $10.
Re: (Score:2)
Re: (Score:1)
I think that density, not price, is going to drive the SSD market as well. We need space on our small computers, and the mechanical solution is not keeping up. I believe this is why Apple went to flash memory for the iPods, although initially they were dedicated to hard drives. My iPod mini only has 4 GB, the same as the nano that replaced it. The new nanos have more memory than even the EOL minis.
I'm pretty confident that Apple's reason for switching to solid-state flash memory in their handheld electronics (and now/soon their laptops as well) was that iPods were notorious for mechanical failure, as they are often put through quite a bit of physical abuse.
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
Another layer in the hierarchy (Score:1)
Hierarchy:
registers
cache
RAM
flash
hard disk
tape
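With flash slotted in between RAM and disk, the hierarchy's latency spread looks roughly like this (ballpark order-of-magnitude figures of my own, not measurements; the exact numbers vary by hardware generation):

```python
# Approximate access latency, in seconds, for each layer of the hierarchy.
hierarchy = [
    ("registers", 1e-9),    # ~1 ns
    ("cache",     1e-8),    # ~10 ns
    ("RAM",       1e-7),    # ~100 ns
    ("flash",     1e-4),    # ~100 us -- the new layer in the middle
    ("hard disk", 1e-2),    # ~10 ms
    ("tape",      1e1),     # ~seconds (robot fetch plus seek)
]

# Flash roughly splits the enormous latency gap between RAM and disk.
for (upper, fast), (lower, slow) in zip(hierarchy, hierarchy[1:]):
    print(f"{lower}: ~{slow / fast:.0f}x slower than {upper}")
```

The interesting feature is the jump from RAM to hard disk, about five orders of magnitude; flash lands near the middle of that gap, which is exactly where a new tier pays off.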
Re: (Score:2)
Re: (Score:1)
SSDs will reach price parity on June 15th (Score:3, Informative)
HDD Array:
8 Seagate Savvio 2.5" HDDs: $350ea $2,800
configured raid-10
1 SAS raid controller $600
Total cost for 144 GB $3,400 or $23.61/GB
SSD Array:
6 Mtron 1025-32 2.5" SSDs: $290ea $1,740
configured raid-5
1 SATA raid controller $250
MFT Software License $1,250
Total Cost for 144 GB $3,240 or $22.50/GB
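The per-gigabyte figures above check out with simple arithmetic:

```python
# Sanity-check the parent post's cost-per-gigabyte arithmetic.
CAPACITY_GB = 144

hdd_total = 8 * 350 + 600           # 8 Savvio HDDs + SAS RAID controller
ssd_total = 6 * 290 + 250 + 1250    # 6 Mtron SSDs + SATA controller + MFT license

hdd_per_gb = hdd_total / CAPACITY_GB
ssd_per_gb = ssd_total / CAPACITY_GB

print(f"HDD: ${hdd_total} total, ${hdd_per_gb:.2f}/GB")  # → $3400, $23.61/GB
print(f"SSD: ${ssd_total} total, ${ssd_per_gb:.2f}/GB")  # → $3240, $22.50/GB
```

So on these configurations the SSD array does come in slightly cheaper per gigabyte, though note the comparison leans on the software license and a cheaper SATA controller on the SSD side.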
HDD Performance:
4K and 8K read IOPS: 250/2000 (single-threaded/multi-threaded)
4K and 8K write IOPS: 1200
SSD Performance:
4K read IOPS: 8000/48000 (single-threaded/multi-threaded)
8K read IOPS: 6000/36000 (single-threaded/multi-threaded)
4K write IOPS: 40000
8K write IOPS: 22000
These performance numbers are with the MFT driver in place. Without MFT, the 4K random write performance is about 140 IOPS (>250x slower).
Endurance for these SSDs in this configuration is good enough to overwrite the entire array with random data three times a day (500GB of random updates/day) for about five years.
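As a rough check on that endurance figure (my own arithmetic; the per-cell cycle rating is an assumed typical SLC value, not a number from the post):

```python
# Back-of-the-envelope check of the endurance claim above.
daily_gb = 500          # claimed random updates per day
years = 5
raw_gb = 6 * 32         # six 32 GB drives of raw flash

total_written_gb = daily_gb * 365 * years
full_overwrites = total_written_gb / raw_gb

print(f"{total_written_gb} GB written over {years} years")  # → 912500 GB
print(f"~{full_overwrites:.0f} full overwrites of every cell")
# Assuming even wear leveling, ~4,750 erase cycles per cell is
# comfortably below typical SLC ratings of ~100,000 cycles.
```

That leaves a wide margin, which is consistent with the claim that wear-leveled SLC arrays outlive their usefulness before they wear out.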
These drives make a wicked mail server (EasyCo just moved one of its mail servers, mirrored, to MLC flash, and the difference is amazing).
Sorry for the blatant advert, but SSDs are here now.
Doug Dumitru
EasyCo LLC
http://managedflash.com/ [managedflash.com]
+1 610 237-2000 x2
I think there's space for both technologies (Score:2)
Why not PRAM? (Score:2)